Jan 14 13:25:48.269708 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 11:12:50 -00 2026
Jan 14 13:25:48.269739 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9
Jan 14 13:25:48.269750 kernel: BIOS-provided physical RAM map:
Jan 14 13:25:48.269762 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:25:48.269770 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 13:25:48.269778 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 13:25:48.269788 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 13:25:48.269798 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 13:25:48.269808 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 13:25:48.269816 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 13:25:48.269824 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 14 13:25:48.269836 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 13:25:48.269844 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 13:25:48.269853 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 13:25:48.269863 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 13:25:48.269872 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 13:25:48.269884 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 13:25:48.269893 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 13:25:48.269901 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 13:25:48.269910 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 13:25:48.269922 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 13:25:48.269931 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 13:25:48.269940 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 13:25:48.269948 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 13:25:48.269957 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 13:25:48.269966 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 13:25:48.269978 kernel: NX (Execute Disable) protection: active
Jan 14 13:25:48.269986 kernel: APIC: Static calls initialized
Jan 14 13:25:48.269995 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 14 13:25:48.270004 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 14 13:25:48.270013 kernel: extended physical RAM map:
Jan 14 13:25:48.270022 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:25:48.270033 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 13:25:48.270042 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 13:25:48.270051 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 13:25:48.270060 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 13:25:48.270069 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 13:25:48.270080 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 13:25:48.270089 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 14 13:25:48.270098 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 14 13:25:48.270112 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 14 13:25:48.270124 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 14 13:25:48.270133 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 14 13:25:48.270144 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 13:25:48.270155 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 13:25:48.270164 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 13:25:48.270173 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 13:25:48.270183 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 13:25:48.270192 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 13:25:48.270201 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 13:25:48.270214 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 13:25:48.270223 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 13:25:48.270233 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 13:25:48.270242 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 13:25:48.270251 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 13:25:48.270263 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 13:25:48.270272 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 13:25:48.270281 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 13:25:48.270291 kernel: efi: EFI v2.7 by EDK II
Jan 14 13:25:48.270300 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 14 13:25:48.270309 kernel: random: crng init done
Jan 14 13:25:48.270322 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 14 13:25:48.270647 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 14 13:25:48.270658 kernel: secureboot: Secure boot disabled
Jan 14 13:25:48.270667 kernel: SMBIOS 2.8 present.
Jan 14 13:25:48.270677 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 14 13:25:48.270686 kernel: DMI: Memory slots populated: 1/1
Jan 14 13:25:48.270695 kernel: Hypervisor detected: KVM
Jan 14 13:25:48.270704 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 13:25:48.270713 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 13:25:48.270722 kernel: kvm-clock: using sched offset of 12933778130 cycles
Jan 14 13:25:48.270735 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 13:25:48.270751 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 13:25:48.270761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:25:48.270771 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:25:48.270781 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 13:25:48.270791 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:25:48.270800 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:25:48.270810 kernel: Using GB pages for direct mapping
Jan 14 13:25:48.270823 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:25:48.270833 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 14 13:25:48.270842 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 14 13:25:48.270853 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270865 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270875 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 14 13:25:48.270885 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270898 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270908 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270918 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 13:25:48.270928 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 14 13:25:48.270938 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 14 13:25:48.270947 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 14 13:25:48.270957 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 14 13:25:48.270970 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 14 13:25:48.270983 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 14 13:25:48.270993 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 14 13:25:48.271003 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 14 13:25:48.271012 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 14 13:25:48.271022 kernel: No NUMA configuration found
Jan 14 13:25:48.271032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 14 13:25:48.271045 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 14 13:25:48.271055 kernel: Zone ranges:
Jan 14 13:25:48.271065 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:25:48.271074 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 14 13:25:48.271084 kernel: Normal empty
Jan 14 13:25:48.271094 kernel: Device empty
Jan 14 13:25:48.271106 kernel: Movable zone start for each node
Jan 14 13:25:48.271116 kernel: Early memory node ranges
Jan 14 13:25:48.271129 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:25:48.271139 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 14 13:25:48.271148 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 14 13:25:48.271158 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 14 13:25:48.271167 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 14 13:25:48.271177 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 14 13:25:48.271186 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 14 13:25:48.271195 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 14 13:25:48.271210 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 14 13:25:48.271222 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:25:48.271242 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:25:48.271254 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 14 13:25:48.271264 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:25:48.271274 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 14 13:25:48.271284 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 14 13:25:48.271294 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 14 13:25:48.271304 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 14 13:25:48.271318 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 14 13:25:48.271645 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 13:25:48.271660 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 13:25:48.271670 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:25:48.271684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 13:25:48.271694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 13:25:48.271704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:25:48.271714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 13:25:48.271726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 13:25:48.271738 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:25:48.271832 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 13:25:48.271848 kernel: TSC deadline timer available
Jan 14 13:25:48.271862 kernel: CPU topo: Max. logical packages: 1
Jan 14 13:25:48.271872 kernel: CPU topo: Max. logical dies: 1
Jan 14 13:25:48.271882 kernel: CPU topo: Max. dies per package: 1
Jan 14 13:25:48.271892 kernel: CPU topo: Max. threads per core: 1
Jan 14 13:25:48.271902 kernel: CPU topo: Num. cores per package: 4
Jan 14 13:25:48.271912 kernel: CPU topo: Num. threads per package: 4
Jan 14 13:25:48.271922 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 13:25:48.271936 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 13:25:48.271946 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 13:25:48.271956 kernel: kvm-guest: setup PV sched yield
Jan 14 13:25:48.271966 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 14 13:25:48.271980 kernel: Booting paravirtualized kernel on KVM
Jan 14 13:25:48.271990 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:25:48.272000 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 13:25:48.272014 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 13:25:48.272024 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 13:25:48.272034 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 13:25:48.272044 kernel: kvm-guest: PV spinlocks enabled
Jan 14 13:25:48.272054 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:25:48.272066 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9
Jan 14 13:25:48.272076 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:25:48.272092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 13:25:48.272104 kernel: Fallback order for Node 0: 0
Jan 14 13:25:48.272114 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 14 13:25:48.272124 kernel: Policy zone: DMA32
Jan 14 13:25:48.272134 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:25:48.272144 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 13:25:48.272154 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 13:25:48.272168 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 13:25:48.272178 kernel: Dynamic Preempt: voluntary
Jan 14 13:25:48.272188 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:25:48.272199 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:25:48.272213 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 13:25:48.272224 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:25:48.272234 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:25:48.272244 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:25:48.272258 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:25:48.272267 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 13:25:48.272278 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:25:48.272288 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:25:48.272298 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 13:25:48.272308 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 13:25:48.272319 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:25:48.272649 kernel: Console: colour dummy device 80x25
Jan 14 13:25:48.272661 kernel: printk: legacy console [ttyS0] enabled
Jan 14 13:25:48.272671 kernel: ACPI: Core revision 20240827
Jan 14 13:25:48.272682 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 13:25:48.272694 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:25:48.272705 kernel: x2apic enabled
Jan 14 13:25:48.272716 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 13:25:48.272731 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 13:25:48.272741 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 13:25:48.272751 kernel: kvm-guest: setup PV IPIs
Jan 14 13:25:48.272761 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 13:25:48.272771 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 13:25:48.272782 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 13:25:48.272792 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 13:25:48.272805 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 13:25:48.272818 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 13:25:48.272829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:25:48.272839 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:25:48.272850 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 13:25:48.272860 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:25:48.272870 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 13:25:48.272885 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 13:25:48.272895 kernel: active return thunk: srso_alias_return_thunk
Jan 14 13:25:48.272905 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 13:25:48.272916 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 13:25:48.272926 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:25:48.272940 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:25:48.272950 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:25:48.272964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:25:48.272974 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:25:48.272984 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 13:25:48.272995 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:25:48.273005 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:25:48.273015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 13:25:48.273025 kernel: landlock: Up and running.
Jan 14 13:25:48.273038 kernel: SELinux: Initializing.
Jan 14 13:25:48.273050 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:25:48.273062 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 13:25:48.273073 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 13:25:48.273083 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 13:25:48.273093 kernel: signal: max sigframe size: 1776
Jan 14 13:25:48.273103 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:25:48.273117 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:25:48.273127 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 13:25:48.273137 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:25:48.273147 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:25:48.273158 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:25:48.273170 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 13:25:48.273181 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 13:25:48.273195 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 13:25:48.273206 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120812K reserved, 0K cma-reserved)
Jan 14 13:25:48.273216 kernel: devtmpfs: initialized
Jan 14 13:25:48.273226 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:25:48.273237 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 14 13:25:48.273247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 14 13:25:48.273257 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 14 13:25:48.273271 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 14 13:25:48.273283 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 14 13:25:48.273295 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 14 13:25:48.273306 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:25:48.273316 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 13:25:48.273326 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:25:48.273632 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:25:48.273648 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:25:48.273660 kernel: audit: type=2000 audit(1768397129.419:1): state=initialized audit_enabled=0 res=1
Jan 14 13:25:48.273672 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:25:48.273683 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:25:48.273693 kernel: cpuidle: using governor menu
Jan 14 13:25:48.273703 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:25:48.273714 kernel: dca service started, version 1.12.1
Jan 14 13:25:48.273728 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 14 13:25:48.273738 kernel: PCI: Using configuration type 1 for base access
Jan 14 13:25:48.273749 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:25:48.273759 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:25:48.273769 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:25:48.273781 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:25:48.273793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:25:48.273807 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:25:48.273817 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:25:48.273827 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:25:48.273837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:25:48.273848 kernel: ACPI: Interpreter enabled
Jan 14 13:25:48.273858 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 13:25:48.273868 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:25:48.273881 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:25:48.273891 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 13:25:48.273904 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 13:25:48.273915 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 13:25:48.274219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 13:25:48.274770 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 13:25:48.275005 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 13:25:48.275021 kernel: PCI host bridge to bus 0000:00
Jan 14 13:25:48.275248 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 13:25:48.275878 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 13:25:48.276092 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 13:25:48.276311 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 14 13:25:48.276856 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 14 13:25:48.277061 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 14 13:25:48.277763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 13:25:48.278015 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 13:25:48.278247 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 13:25:48.278806 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 14 13:25:48.279026 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 14 13:25:48.279239 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 14 13:25:48.279768 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 13:25:48.279983 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 14648 usecs
Jan 14 13:25:48.280211 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 13:25:48.280762 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 14 13:25:48.280985 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 14 13:25:48.281201 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 14 13:25:48.281756 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 13:25:48.281980 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 14 13:25:48.282197 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 14 13:25:48.282815 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 14 13:25:48.283045 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 13:25:48.283263 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 14 13:25:48.283801 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 14 13:25:48.284023 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 14 13:25:48.284247 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 14 13:25:48.284802 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 13:25:48.285024 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 13:25:48.285240 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 19531 usecs
Jan 14 13:25:48.285792 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 13:25:48.286012 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 14 13:25:48.286232 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 14 13:25:48.286866 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 13:25:48.287084 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 14 13:25:48.287100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 13:25:48.287112 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 13:25:48.287126 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 13:25:48.287143 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 13:25:48.287153 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 13:25:48.287163 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 13:25:48.287173 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 13:25:48.287184 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 13:25:48.287194 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 13:25:48.287204 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 13:25:48.287218 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 13:25:48.287228 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 13:25:48.287240 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 13:25:48.287253 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 13:25:48.287263 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 13:25:48.287274 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 13:25:48.287284 kernel: iommu: Default domain type: Translated
Jan 14 13:25:48.287298 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:25:48.287308 kernel: efivars: Registered efivars operations
Jan 14 13:25:48.287319 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:25:48.287642 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 13:25:48.287656 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 14 13:25:48.287667 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 14 13:25:48.287677 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 14 13:25:48.287686 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 14 13:25:48.287700 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 14 13:25:48.287710 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 14 13:25:48.287720 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 14 13:25:48.287729 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 14 13:25:48.287950 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 13:25:48.288170 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 13:25:48.288729 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 13:25:48.288749 kernel: vgaarb: loaded
Jan 14 13:25:48.288767 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 13:25:48.288778 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 13:25:48.288788 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 13:25:48.288798 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:25:48.288809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:25:48.288820 kernel: pnp: PnP ACPI init
Jan 14 13:25:48.289056 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 14 13:25:48.289077 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 13:25:48.289088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:25:48.289099 kernel: NET: Registered PF_INET protocol family
Jan 14 13:25:48.289109 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:25:48.289121 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 13:25:48.289152 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:25:48.289168 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 13:25:48.289179 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 13:25:48.289190 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 13:25:48.289201 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:25:48.289215 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 13:25:48.289226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:25:48.289237 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:25:48.289763 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 14 13:25:48.289979 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 14 13:25:48.290180 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 13:25:48.290698 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 13:25:48.290903 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 13:25:48.291106 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 14 13:25:48.291304 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 14 13:25:48.291831 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 14 13:25:48.291850 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:25:48.291864 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 13:25:48.291875 kernel: Initialise system trusted keyrings
Jan 14 13:25:48.291886 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 13:25:48.291896 kernel: Key type asymmetric registered
Jan 14 13:25:48.291907 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:25:48.291922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 13:25:48.291933 kernel: io scheduler mq-deadline registered
Jan 14 13:25:48.291944 kernel: io scheduler kyber registered
Jan 14 13:25:48.291954 kernel: io scheduler bfq registered
Jan 14 13:25:48.291966 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:25:48.291981 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 13:25:48.291996 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 13:25:48.292007 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 13:25:48.292018 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:25:48.292028 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:25:48.292040 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 13:25:48.292053 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 13:25:48.292064 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 13:25:48.292286 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 13:25:48.292304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 14 13:25:48.292835 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 13:25:48.293048 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T13:25:41 UTC (1768397141)
Jan 14 13:25:48.293264 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 14 13:25:48.293280 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 13:25:48.293291 kernel: efifb: probing for efifb
Jan 14 13:25:48.293302 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 14 13:25:48.293313 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 14 13:25:48.293324 kernel: efifb: scrolling: redraw
Jan 14 13:25:48.293650 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:25:48.293662 kernel: Console: switching to colour frame buffer device 160x50
Jan 14 13:25:48.293678 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:25:48.293689 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:25:48.293700 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:25:48.293710 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:25:48.293724 kernel: Segment Routing with IPv6
Jan 14 13:25:48.293736 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:25:48.293747 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:25:48.293761 kernel: Key type dns_resolver
registered Jan 14 13:25:48.293772 kernel: IPI shorthand broadcast: enabled Jan 14 13:25:48.293783 kernel: sched_clock: Marking stable (7049104791, 4281730407)->(13671241363, -2340406165) Jan 14 13:25:48.293794 kernel: registered taskstats version 1 Jan 14 13:25:48.293804 kernel: Loading compiled-in X.509 certificates Jan 14 13:25:48.293815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8d0aa6f955c6f54d5fb15cad90d0ea8c698688e' Jan 14 13:25:48.293826 kernel: Demotion targets for Node 0: null Jan 14 13:25:48.293840 kernel: Key type .fscrypt registered Jan 14 13:25:48.293854 kernel: Key type fscrypt-provisioning registered Jan 14 13:25:48.293865 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:25:48.293875 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:25:48.293886 kernel: ima: No architecture policies found Jan 14 13:25:48.293897 kernel: clk: Disabling unused clocks Jan 14 13:25:48.293907 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 13:25:48.293922 kernel: Write protecting the kernel read-only data: 47104k Jan 14 13:25:48.293933 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 13:25:48.293944 kernel: Run /init as init process Jan 14 13:25:48.293955 kernel: with arguments: Jan 14 13:25:48.293968 kernel: /init Jan 14 13:25:48.293979 kernel: with environment: Jan 14 13:25:48.293990 kernel: HOME=/ Jan 14 13:25:48.294003 kernel: TERM=linux Jan 14 13:25:48.294014 kernel: SCSI subsystem initialized Jan 14 13:25:48.294024 kernel: libata version 3.00 loaded. 
Jan 14 13:25:48.294249 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 13:25:48.294265 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 13:25:48.294883 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 13:25:48.295111 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 13:25:48.295671 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 13:25:48.295929 kernel: scsi host0: ahci Jan 14 13:25:48.296165 kernel: scsi host1: ahci Jan 14 13:25:48.296728 kernel: scsi host2: ahci Jan 14 13:25:48.296970 kernel: scsi host3: ahci Jan 14 13:25:48.297211 kernel: scsi host4: ahci Jan 14 13:25:48.297761 kernel: scsi host5: ahci Jan 14 13:25:48.297780 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 14 13:25:48.297791 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 14 13:25:48.297802 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 14 13:25:48.297814 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 14 13:25:48.297826 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 14 13:25:48.297844 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 14 13:25:48.297855 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 13:25:48.297866 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 13:25:48.297876 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 13:25:48.297887 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 14 13:25:48.297898 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 13:25:48.297909 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 13:25:48.297923 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 13:25:48.297934 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 14 13:25:48.297945 
kernel: ata3.00: applying bridge limits Jan 14 13:25:48.297960 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 13:25:48.297970 kernel: ata3.00: configured for UDMA/100 Jan 14 13:25:48.298224 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 13:25:48.298788 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 14 13:25:48.299018 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 14 13:25:48.299262 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 14 13:25:48.299279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 13:25:48.299290 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:25:48.299301 kernel: GPT:16515071 != 27000831 Jan 14 13:25:48.299316 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 13:25:48.299327 kernel: GPT:16515071 != 27000831 Jan 14 13:25:48.299661 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 13:25:48.299672 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 13:25:48.299914 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 14 13:25:48.299930 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:25:48.299941 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:25:48.299960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 13:25:48.299971 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 13:25:48.299982 kernel: raid6: avx2x4 gen() 15339 MB/s Jan 14 13:25:48.299993 kernel: raid6: avx2x2 gen() 16716 MB/s Jan 14 13:25:48.300004 kernel: raid6: avx2x1 gen() 14957 MB/s Jan 14 13:25:48.300014 kernel: raid6: using algorithm avx2x2 gen() 16716 MB/s Jan 14 13:25:48.300025 kernel: raid6: .... 
xor() 15894 MB/s, rmw enabled Jan 14 13:25:48.300036 kernel: raid6: using avx2x2 recovery algorithm Jan 14 13:25:48.300050 kernel: xor: automatically using best checksumming function avx Jan 14 13:25:48.300061 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:25:48.300075 kernel: BTRFS: device fsid a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (180) Jan 14 13:25:48.300089 kernel: BTRFS info (device dm-0): first mount of filesystem a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9 Jan 14 13:25:48.300101 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:25:48.300112 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:25:48.300123 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 13:25:48.300137 kernel: loop: module loaded Jan 14 13:25:48.300147 kernel: loop0: detected capacity change from 0 to 100536 Jan 14 13:25:48.300158 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:25:48.300170 systemd[1]: Successfully made /usr/ read-only. Jan 14 13:25:48.300185 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 13:25:48.300204 systemd[1]: Detected virtualization kvm. Jan 14 13:25:48.300215 systemd[1]: Detected architecture x86-64. Jan 14 13:25:48.300226 systemd[1]: Running in initrd. Jan 14 13:25:48.300237 systemd[1]: No hostname configured, using default hostname. Jan 14 13:25:48.300249 systemd[1]: Hostname set to . Jan 14 13:25:48.300261 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 13:25:48.300272 systemd[1]: Queued start job for default target initrd.target. 
Jan 14 13:25:48.300289 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:25:48.300301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:25:48.300315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:25:48.300327 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:25:48.300663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:25:48.300680 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:25:48.300697 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:25:48.300709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:25:48.300720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:25:48.300732 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 13:25:48.300743 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:25:48.300755 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:25:48.300770 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:25:48.300782 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:25:48.300795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:25:48.300809 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:25:48.300825 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 13:25:48.300838 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:25:48.300850 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 14 13:25:48.300865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:25:48.300877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:25:48.300889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:25:48.300901 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:25:48.300913 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:25:48.300925 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:25:48.300936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:25:48.300954 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:25:48.300967 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 13:25:48.300978 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:25:48.300989 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:25:48.301001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:25:48.301017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:25:48.301029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:25:48.301041 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:25:48.301052 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 13:25:48.301103 systemd-journald[318]: Collecting audit messages is enabled. Jan 14 13:25:48.301134 kernel: audit: type=1130 audit(1768397148.275:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 13:25:48.301146 systemd-journald[318]: Journal started Jan 14 13:25:48.301172 systemd-journald[318]: Runtime Journal (/run/log/journal/0404da4e41814cc39c840bdd02311d30) is 6M, max 48M, 42M free. Jan 14 13:25:48.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.337287 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:25:48.385806 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:25:48.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.399995 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:25:48.418955 kernel: audit: type=1130 audit(1768397148.387:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.461971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:25:48.542966 kernel: audit: type=1130 audit(1768397148.468:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.548105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:25:48.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.589322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:25:48.663931 kernel: audit: type=1130 audit(1768397148.583:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.627823 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 13:25:48.642782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:25:48.756728 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:25:48.767315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:25:48.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.813686 kernel: audit: type=1130 audit(1768397148.769:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.817932 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:25:48.888147 kernel: Bridge firewalling registered Jan 14 13:25:48.888179 kernel: audit: type=1130 audit(1768397148.849:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:25:48.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.850648 systemd-modules-load[320]: Inserted module 'br_netfilter' Jan 14 13:25:48.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.887717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:25:49.021167 kernel: audit: type=1130 audit(1768397148.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:49.021202 kernel: audit: type=1130 audit(1768397148.977:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:48.933063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:25:48.987002 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:25:49.085751 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:25:49.162309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:25:49.177159 dracut-cmdline[352]: dracut-109 Jan 14 13:25:49.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 13:25:49.249056 dracut-cmdline[352]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9 Jan 14 13:25:49.336125 kernel: audit: type=1130 audit(1768397149.195:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:49.336171 kernel: audit: type=1334 audit(1768397149.198:11): prog-id=6 op=LOAD Jan 14 13:25:49.198000 audit: BPF prog-id=6 op=LOAD Jan 14 13:25:49.200717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:25:49.486268 systemd-resolved[373]: Positive Trust Anchors: Jan 14 13:25:49.486814 systemd-resolved[373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:25:49.486821 systemd-resolved[373]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 13:25:49.486860 systemd-resolved[373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:25:49.528877 systemd-resolved[373]: Defaulting to hostname 'linux'. 
Jan 14 13:25:49.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:49.532707 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:25:49.640950 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:25:49.917973 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:25:49.966764 kernel: iscsi: registered transport (tcp) Jan 14 13:25:50.026115 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:25:50.026185 kernel: QLogic iSCSI HBA Driver Jan 14 13:25:50.131281 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:25:50.210342 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:25:50.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:50.254316 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:25:50.440027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:25:50.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:50.474321 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:25:50.505745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:25:50.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:25:50.650240 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:25:50.669000 audit: BPF prog-id=7 op=LOAD Jan 14 13:25:50.669000 audit: BPF prog-id=8 op=LOAD Jan 14 13:25:50.671272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:25:50.775262 systemd-udevd[584]: Using default interface naming scheme 'v257'. Jan 14 13:25:50.812867 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:25:50.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:50.835081 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:25:50.941952 dracut-pre-trigger[623]: rd.md=0: removing MD RAID activation Jan 14 13:25:51.084274 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:25:51.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:51.115180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:25:51.155032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:25:51.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:51.157000 audit: BPF prog-id=9 op=LOAD Jan 14 13:25:51.159798 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:25:51.312131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 14 13:25:51.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:51.331080 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:25:51.401753 systemd-networkd[728]: lo: Link UP Jan 14 13:25:51.401765 systemd-networkd[728]: lo: Gained carrier Jan 14 13:25:51.409837 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:25:51.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:51.452098 systemd[1]: Reached target network.target - Network. Jan 14 13:25:51.542674 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 13:25:51.622211 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 13:25:51.691221 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:25:51.726227 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:25:51.760963 kernel: AES CTR mode by8 optimization enabled Jan 14 13:25:51.726351 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:25:51.736941 systemd-networkd[728]: eth0: Link UP Jan 14 13:25:51.736995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 14 13:25:51.737214 systemd-networkd[728]: eth0: Gained carrier Jan 14 13:25:51.737230 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:25:51.861738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 13:25:51.951903 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:25:51.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:51.957956 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 13:25:52.064164 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 13:25:51.974757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:25:51.974860 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:25:51.993026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:25:52.116223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:25:52.156204 disk-uuid[839]: Primary Header is updated. Jan 14 13:25:52.156204 disk-uuid[839]: Secondary Entries is updated. Jan 14 13:25:52.156204 disk-uuid[839]: Secondary Header is updated. Jan 14 13:25:52.230039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:25:52.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:25:52.390078 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 14 13:25:52.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:52.424695 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:25:52.443904 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:25:52.462074 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 13:25:52.502057 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 13:25:52.614053 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:25:52.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:52.856945 systemd-networkd[728]: eth0: Gained IPv6LL
Jan 14 13:25:53.245135 disk-uuid[841]: Warning: The kernel is still using the old partition table.
Jan 14 13:25:53.245135 disk-uuid[841]: The new table will be used at the next reboot or after you
Jan 14 13:25:53.245135 disk-uuid[841]: run partprobe(8) or kpartx(8)
Jan 14 13:25:53.245135 disk-uuid[841]: The operation has completed successfully.
Jan 14 13:25:53.320032 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 13:25:53.320777 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 13:25:53.437302 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jan 14 13:25:53.437345 kernel: audit: type=1130 audit(1768397153.320:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.437365 kernel: audit: type=1131 audit(1768397153.320:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.323963 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 13:25:53.586948 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866)
Jan 14 13:25:53.613818 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:25:53.613898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:25:53.661847 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:25:53.661928 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:25:53.708328 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:25:53.723040 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 13:25:53.777034 kernel: audit: type=1130 audit(1768397153.740:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:53.743652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 13:25:54.055702 ignition[885]: Ignition 2.24.0
Jan 14 13:25:54.055805 ignition[885]: Stage: fetch-offline
Jan 14 13:25:54.055867 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:25:54.055884 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:25:54.056006 ignition[885]: parsed url from cmdline: ""
Jan 14 13:25:54.056012 ignition[885]: no config URL provided
Jan 14 13:25:54.056019 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 13:25:54.056033 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jan 14 13:25:54.056086 ignition[885]: op(1): [started] loading QEMU firmware config module
Jan 14 13:25:54.056095 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 13:25:54.094844 ignition[885]: op(1): [finished] loading QEMU firmware config module
Jan 14 13:25:55.562190 ignition[885]: parsing config with SHA512: 6b17fbbf5724a626e6529818ed0a3fafa93a623b328cab5ac6dfffa3312abf98ba8dd9bddbe54953dae48e5eb5e1eb17d40e5c8b9801d2777f8d016cc3becb2a
Jan 14 13:25:55.622353 unknown[885]: fetched base config from "system"
Jan 14 13:25:55.622761 unknown[885]: fetched user config from "qemu"
Jan 14 13:25:55.652052 ignition[885]: fetch-offline: fetch-offline passed
Jan 14 13:25:55.652758 ignition[885]: Ignition finished successfully
Jan 14 13:25:55.682156 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:25:55.768023 kernel: audit: type=1130 audit(1768397155.700:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:55.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:55.701825 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 13:25:55.704146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 13:25:55.906865 ignition[896]: Ignition 2.24.0
Jan 14 13:25:55.907751 ignition[896]: Stage: kargs
Jan 14 13:25:55.907955 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:25:55.907968 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:25:55.988898 ignition[896]: kargs: kargs passed
Jan 14 13:25:55.989106 ignition[896]: Ignition finished successfully
Jan 14 13:25:56.020882 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 13:25:56.073131 kernel: audit: type=1130 audit(1768397156.037:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:56.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:56.041896 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 13:25:56.230965 ignition[903]: Ignition 2.24.0
Jan 14 13:25:56.231077 ignition[903]: Stage: disks
Jan 14 13:25:56.231292 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Jan 14 13:25:56.231305 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:25:56.233073 ignition[903]: disks: disks passed
Jan 14 13:25:56.233146 ignition[903]: Ignition finished successfully
Jan 14 13:25:56.304383 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 13:25:56.362162 kernel: audit: type=1130 audit(1768397156.319:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:56.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:56.320819 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 13:25:56.377699 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 13:25:56.408243 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:25:56.439928 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:25:56.467154 systemd[1]: Reached target basic.target - Basic System.
Jan 14 13:25:56.470266 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 13:25:56.618076 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 13:25:56.633955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 13:25:56.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:56.670386 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 13:25:56.712216 kernel: audit: type=1130 audit(1768397156.662:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:57.200111 kernel: EXT4-fs (vda9): mounted filesystem 00eaf6ed-0a89-4fef-afb6-3b81d372e1c1 r/w with ordered data mode. Quota mode: none.
Jan 14 13:25:57.202331 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 13:25:57.226848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 13:25:57.262211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:25:57.279093 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 13:25:57.311085 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 13:25:57.311259 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 13:25:57.311299 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:25:57.465854 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920)
Jan 14 13:25:57.465890 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:25:57.465906 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:25:57.465921 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:25:57.465937 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:25:57.361304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 13:25:57.431292 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 13:25:57.512824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:25:58.143074 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 13:25:58.202936 kernel: audit: type=1130 audit(1768397158.143:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.146266 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 13:25:58.227848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 13:25:58.270820 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 13:25:58.297852 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:25:58.354000 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 13:25:58.411198 kernel: audit: type=1130 audit(1768397158.353:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.420354 ignition[1018]: INFO : Ignition 2.24.0
Jan 14 13:25:58.420354 ignition[1018]: INFO : Stage: mount
Jan 14 13:25:58.420354 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:25:58.420354 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:25:58.476085 ignition[1018]: INFO : mount: mount passed
Jan 14 13:25:58.476085 ignition[1018]: INFO : Ignition finished successfully
Jan 14 13:25:58.499993 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 13:25:58.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.517796 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 13:25:58.580246 kernel: audit: type=1130 audit(1768397158.513:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:25:58.613103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 13:25:58.705964 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029)
Jan 14 13:25:58.732178 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe
Jan 14 13:25:58.732249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 13:25:58.780061 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 13:25:58.780143 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 13:25:58.784778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 13:25:58.954040 ignition[1045]: INFO : Ignition 2.24.0
Jan 14 13:25:58.954040 ignition[1045]: INFO : Stage: files
Jan 14 13:25:58.977308 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:25:58.994207 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:25:59.017275 ignition[1045]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 13:25:59.038979 ignition[1045]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 13:25:59.058012 ignition[1045]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 13:25:59.075325 ignition[1045]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 13:25:59.093396 ignition[1045]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 13:25:59.093396 ignition[1045]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 13:25:59.080016 unknown[1045]: wrote ssh authorized keys file for user: core
Jan 14 13:25:59.134919 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 13:25:59.134919 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 13:25:59.300994 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 13:25:59.512842 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 13:25:59.541350 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 14 13:26:00.225233 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 13:26:00.922049 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 13:26:00.952790 ignition[1045]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 13:26:01.185130 kernel: audit: type=1130 audit(1768397161.137:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 13:26:01.185245 ignition[1045]: INFO : files: files passed
Jan 14 13:26:01.185245 ignition[1045]: INFO : Ignition finished successfully
Jan 14 13:26:01.075774 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 13:26:01.140841 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 13:26:01.187370 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 13:26:01.473340 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 13:26:01.608414 kernel: audit: type=1130 audit(1768397161.489:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.608680 kernel: audit: type=1131 audit(1768397161.489:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.608699 kernel: audit: type=1130 audit(1768397161.563:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.473921 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 13:26:01.639022 initrd-setup-root-after-ignition[1077]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 13:26:01.499658 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:26:01.683946 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:26:01.683946 initrd-setup-root-after-ignition[1079]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:26:01.565233 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:26:01.737274 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:26:01.626262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:26:01.877322 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:26:01.877938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:26:01.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.928855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:26:02.000198 kernel: audit: type=1130 audit(1768397161.926:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.000248 kernel: audit: type=1131 audit(1768397161.927:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:01.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.016734 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:26:02.047188 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:26:02.049968 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:26:02.153951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:26:02.227441 kernel: audit: type=1130 audit(1768397162.153:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.157225 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:26:02.287109 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 13:26:02.287725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:26:02.288235 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:26:02.320129 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:26:02.375946 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:26:02.443097 kernel: audit: type=1131 audit(1768397162.403:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.376260 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:26:02.443925 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:26:02.459992 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:26:02.488318 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:26:02.512898 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:26:02.553065 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:26:02.580140 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 13:26:02.612964 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:26:02.641093 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:26:02.655887 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:26:02.687193 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:26:02.741286 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:26:02.742039 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:26:02.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.742278 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:26:02.787904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:26:02.816284 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:26:02.846379 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:26:02.847337 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:26:02.891364 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:26:02.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.891741 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:26:02.938964 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:26:02.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:02.939271 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:26:02.954305 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:26:02.981276 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:26:02.982218 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:26:03.010050 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:26:03.039298 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:26:03.066387 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:26:03.066765 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:26:03.102066 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:26:03.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.102265 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:26:03.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.131956 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 13:26:03.132076 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 13:26:03.142209 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:26:03.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.142391 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:26:03.198874 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:26:03.199125 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:26:03.213122 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:26:03.255888 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:26:03.256171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:26:03.397159 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:26:03.473315 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 14 13:26:03.473348 kernel: audit: type=1131 audit(1768397163.410:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.410198 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:26:03.584972 kernel: audit: type=1131 audit(1768397163.488:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.585004 kernel: audit: type=1131 audit(1768397163.536:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.585094 ignition[1103]: INFO : Ignition 2.24.0
Jan 14 13:26:03.585094 ignition[1103]: INFO : Stage: umount
Jan 14 13:26:03.585094 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:26:03.585094 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 13:26:03.585094 ignition[1103]: INFO : umount: umount passed
Jan 14 13:26:03.585094 ignition[1103]: INFO : Ignition finished successfully
Jan 14 13:26:03.808098 kernel: audit: type=1131 audit(1768397163.595:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.808127 kernel: audit: type=1130 audit(1768397163.644:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.808149 kernel: audit: type=1131 audit(1768397163.644:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.808161 kernel: audit: type=1131 audit(1768397163.758:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.410412 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:26:03.870940 kernel: audit: type=1131 audit(1768397163.828:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.411058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:26:03.925122 kernel: audit: type=1131 audit(1768397163.874:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.411275 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:26:03.974784 kernel: audit: type=1131 audit(1768397163.938:61): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:03.488749 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:26:03.488923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:26:03.573419 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:26:03.586295 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:26:03.602882 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:26:03.603091 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:26:03.646148 systemd[1]: Stopped target network.target - Network.
Jan 14 13:26:03.730320 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:26:03.730415 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:26:03.759236 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:26:03.759297 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:26:04.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:03.829811 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:26:03.829884 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:26:03.874989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:26:03.875061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:26:03.939342 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:26:03.977195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:26:04.116817 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:26:04.117077 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:26:04.220969 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 13:26:04.275000 audit: BPF prog-id=6 op=UNLOAD Jan 14 13:26:04.280887 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:26:04.281217 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:26:04.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.301959 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:26:04.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:04.302178 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:26:04.343027 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 13:26:04.358142 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:26:04.358216 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:26:04.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.383878 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:26:04.383965 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:26:04.408926 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:26:04.426313 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:26:04.426384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:26:04.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.501148 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:26:04.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.501228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:26:04.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:04.519005 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:26:04.519057 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:26:04.544963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:26:04.639000 audit: BPF prog-id=9 op=UNLOAD Jan 14 13:26:04.644836 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:26:04.646035 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:26:04.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.687995 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:26:04.688170 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:26:04.708346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:26:04.708395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:26:04.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.742112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:26:04.742176 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:26:04.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.782052 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:26:04.782111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 14 13:26:04.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.821170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:26:04.821225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:26:04.897146 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:26:04.926264 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 13:26:04.926428 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:26:04.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.956996 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:26:04.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.957054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:26:05.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:04.991875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:26:04.991943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:26:05.007917 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 14 13:26:05.091390 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:26:05.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:05.134965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:26:05.135335 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:26:05.149376 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:26:05.176947 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:26:05.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:05.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:05.250209 systemd[1]: Switching root. Jan 14 13:26:05.324335 systemd-journald[318]: Journal stopped Jan 14 13:26:17.947985 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). 
Jan 14 13:26:17.948070 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:26:17.948100 kernel: SELinux: policy capability open_perms=1 Jan 14 13:26:17.948118 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:26:17.948136 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:26:17.948151 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:26:17.948173 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:26:17.948189 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:26:17.948211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:26:17.948235 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 13:26:17.948260 systemd[1]: Successfully loaded SELinux policy in 158.006ms. Jan 14 13:26:17.948284 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.321ms. Jan 14 13:26:17.948303 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 13:26:17.948320 systemd[1]: Detected virtualization kvm. Jan 14 13:26:17.948344 systemd[1]: Detected architecture x86-64. Jan 14 13:26:17.948367 systemd[1]: Detected first boot. Jan 14 13:26:17.948387 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Jan 14 13:26:17.948402 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 14 13:26:17.948418 kernel: audit: type=1334 audit(1768397174.492:83): prog-id=10 op=LOAD Jan 14 13:26:17.948443 kernel: audit: type=1334 audit(1768397174.492:84): prog-id=10 op=UNLOAD Jan 14 13:26:17.948936 kernel: audit: type=1334 audit(1768397174.492:85): prog-id=11 op=LOAD Jan 14 13:26:17.948954 kernel: audit: type=1334 audit(1768397174.492:86): prog-id=11 op=UNLOAD Jan 14 13:26:17.948978 zram_generator::config[1148]: No configuration found. Jan 14 13:26:17.948998 kernel: Guest personality initialized and is inactive Jan 14 13:26:17.949016 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 13:26:17.949033 kernel: Initialized host personality Jan 14 13:26:17.949048 kernel: NET: Registered PF_VSOCK protocol family Jan 14 13:26:17.949063 systemd[1]: Populated /etc with preset unit settings. Jan 14 13:26:17.949079 kernel: audit: type=1334 audit(1768397175.840:87): prog-id=12 op=LOAD Jan 14 13:26:17.949098 kernel: audit: type=1334 audit(1768397175.841:88): prog-id=3 op=UNLOAD Jan 14 13:26:17.949118 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 13:26:17.949147 kernel: audit: type=1334 audit(1768397175.841:89): prog-id=13 op=LOAD Jan 14 13:26:17.949167 kernel: audit: type=1334 audit(1768397175.841:90): prog-id=14 op=LOAD Jan 14 13:26:17.949184 kernel: audit: type=1334 audit(1768397175.841:91): prog-id=4 op=UNLOAD Jan 14 13:26:17.949200 kernel: audit: type=1334 audit(1768397175.841:92): prog-id=5 op=UNLOAD Jan 14 13:26:17.949215 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 13:26:17.949233 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 13:26:17.949258 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:26:17.949279 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jan 14 13:26:17.949299 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:26:17.949316 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:26:17.949332 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:26:17.949348 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:26:17.949365 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:26:17.949385 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:26:17.949415 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:26:17.949434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:26:17.949805 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 13:26:17.949826 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:26:17.949843 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 13:26:17.949859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:26:17.949875 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 13:26:17.949898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:26:17.949917 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:26:17.949935 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 13:26:17.949951 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 13:26:17.949967 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
Jan 14 13:26:17.949983 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:26:17.949999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:26:17.950026 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:26:17.950043 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 13:26:17.950059 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:26:17.950075 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:26:17.950091 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:26:17.950107 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:26:17.950126 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 13:26:17.950145 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 13:26:17.950167 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 13:26:17.950183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:26:17.950199 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 13:26:17.950216 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 13:26:17.950235 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:26:17.950253 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:26:17.950270 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:26:17.950289 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:26:17.950305 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:26:17.950321 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 14 13:26:17.950340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:26:17.950359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 13:26:17.950378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:26:17.950399 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:26:17.950419 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 13:26:17.950435 systemd[1]: Reached target machines.target - Containers. Jan 14 13:26:17.950917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:26:17.950943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:26:17.950963 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:26:17.950980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 13:26:17.951002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:26:17.951021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:26:17.951037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:26:17.951055 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:26:17.951075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:26:17.951095 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:26:17.951112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 14 13:26:17.951131 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 13:26:17.951148 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 13:26:17.951163 kernel: fuse: init (API version 7.41) Jan 14 13:26:17.951179 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 13:26:17.951201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:26:17.951221 kernel: ACPI: bus type drm_connector registered Jan 14 13:26:17.951242 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:26:17.951259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:26:17.951275 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:26:17.951317 systemd-journald[1234]: Collecting audit messages is enabled. Jan 14 13:26:17.951360 systemd-journald[1234]: Journal started Jan 14 13:26:17.951388 systemd-journald[1234]: Runtime Journal (/run/log/journal/0404da4e41814cc39c840bdd02311d30) is 6M, max 48M, 42M free. Jan 14 13:26:17.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:17.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:17.793000 audit: BPF prog-id=14 op=UNLOAD Jan 14 13:26:17.793000 audit: BPF prog-id=13 op=UNLOAD Jan 14 13:26:17.799000 audit: BPF prog-id=15 op=LOAD Jan 14 13:26:17.802000 audit: BPF prog-id=16 op=LOAD Jan 14 13:26:17.805000 audit: BPF prog-id=17 op=LOAD Jan 14 13:26:17.943000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 13:26:17.943000 audit[1234]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff5299f090 a2=4000 a3=0 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:26:17.943000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 13:26:15.818857 systemd[1]: Queued start job for default target multi-user.target. Jan 14 13:26:15.843035 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 13:26:15.846079 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 13:26:15.847144 systemd[1]: systemd-journald.service: Consumed 4.180s CPU time. Jan 14 13:26:17.981751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:26:18.039992 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 13:26:18.053746 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:26:18.088757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:26:18.102005 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 14 13:26:18.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.117295 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:26:18.131827 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:26:18.147019 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:26:18.161267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 13:26:18.176883 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 13:26:18.192364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:26:18.206930 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:26:18.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.226091 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:26:18.243758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 13:26:18.244087 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:26:18.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:18.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.263013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:26:18.264271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:26:18.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.282977 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:26:18.283408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:26:18.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.301747 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:26:18.302119 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:26:18.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:18.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.322044 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:26:18.322704 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:26:18.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.341254 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:26:18.341938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:26:18.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.361210 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:26:18.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:18.380972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:26:18.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.404045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:26:18.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.426090 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 13:26:18.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.448311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:26:18.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:18.490738 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:26:18.510071 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 13:26:18.532811 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:26:18.563266 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 14 13:26:18.581032 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 13:26:18.581076 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 13:26:18.599370 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 13:26:18.619191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:26:18.619425 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 13:26:18.623070 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 13:26:18.641428 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 13:26:18.659921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:26:18.663378 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 13:26:18.681324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:26:18.686156 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:26:18.705948 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 13:26:18.711064 systemd-journald[1234]: Time spent on flushing to /var/log/journal/0404da4e41814cc39c840bdd02311d30 is 20.671ms for 1209 entries.
Jan 14 13:26:18.711064 systemd-journald[1234]: System Journal (/var/log/journal/0404da4e41814cc39c840bdd02311d30) is 8M, max 163.5M, 155.5M free.
Jan 14 13:26:20.548425 systemd-journald[1234]: Received client request to flush runtime journal.
Jan 14 13:26:20.550245 kernel: loop1: detected capacity change from 0 to 50784
Jan 14 13:26:18.742893 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 13:26:18.760928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 13:26:18.779348 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 13:26:20.548250 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:26:20.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.577838 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 13:26:20.583258 kernel: kauditd_printk_skb: 35 callbacks suppressed
Jan 14 13:26:20.583309 kernel: audit: type=1130 audit(1768397180.572:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.637184 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 13:26:20.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.671974 kernel: audit: type=1130 audit(1768397180.634:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.723969 kernel: audit: type=1130 audit(1768397180.685:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.724310 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 13:26:20.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.745850 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 13:26:20.773896 kernel: audit: type=1130 audit(1768397180.739:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.788782 kernel: loop2: detected capacity change from 0 to 111560
Jan 14 13:26:20.806133 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 13:26:20.830000 audit: BPF prog-id=18 op=LOAD
Jan 14 13:26:20.854261 kernel: audit: type=1334 audit(1768397180.830:130): prog-id=18 op=LOAD
Jan 14 13:26:20.854318 kernel: audit: type=1334 audit(1768397180.830:131): prog-id=19 op=LOAD
Jan 14 13:26:20.854342 kernel: audit: type=1334 audit(1768397180.830:132): prog-id=20 op=LOAD
Jan 14 13:26:20.830000 audit: BPF prog-id=19 op=LOAD
Jan 14 13:26:20.830000 audit: BPF prog-id=20 op=LOAD
Jan 14 13:26:20.833250 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 13:26:20.887000 audit: BPF prog-id=21 op=LOAD
Jan 14 13:26:20.898050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:26:20.901141 kernel: audit: type=1334 audit(1768397180.887:133): prog-id=21 op=LOAD
Jan 14 13:26:20.943074 kernel: loop3: detected capacity change from 0 to 229808
Jan 14 13:26:20.935965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:26:20.958231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 13:26:20.965939 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 13:26:20.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:20.998428 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 13:26:21.035781 kernel: audit: type=1130 audit(1768397180.988:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:21.035875 kernel: audit: type=1334 audit(1768397180.995:135): prog-id=22 op=LOAD
Jan 14 13:26:20.995000 audit: BPF prog-id=22 op=LOAD
Jan 14 13:26:20.996000 audit: BPF prog-id=23 op=LOAD
Jan 14 13:26:20.996000 audit: BPF prog-id=24 op=LOAD
Jan 14 13:26:21.045000 audit: BPF prog-id=25 op=LOAD
Jan 14 13:26:21.046000 audit: BPF prog-id=26 op=LOAD
Jan 14 13:26:21.046000 audit: BPF prog-id=27 op=LOAD
Jan 14 13:26:21.048381 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 13:26:21.065052 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Jan 14 13:26:21.065072 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Jan 14 13:26:21.089001 kernel: loop4: detected capacity change from 0 to 50784
Jan 14 13:26:21.089095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:26:21.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:21.111800 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 13:26:21.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:21.131000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 13:26:21.131000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 13:26:21.133000 audit: BPF prog-id=28 op=LOAD
Jan 14 13:26:21.133000 audit: BPF prog-id=29 op=LOAD
Jan 14 13:26:21.136068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:26:21.171999 kernel: loop5: detected capacity change from 0 to 111560
Jan 14 13:26:21.219896 kernel: loop6: detected capacity change from 0 to 229808
Jan 14 13:26:21.226238 systemd-udevd[1297]: Using default interface naming scheme 'v257'.
Jan 14 13:26:21.227759 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 13:26:21.228865 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 13:26:21.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:21.248299 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 13:26:21.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:21.314722 (sd-merge)[1295]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 14 13:26:21.322903 (sd-merge)[1295]: Merged extensions into '/usr'.
Jan 14 13:26:21.335025 systemd[1]: Reload requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 13:26:21.335133 systemd[1]: Reloading...
Jan 14 13:26:21.424245 systemd-oomd[1285]: No swap; memory pressure usage will be degraded
Jan 14 13:26:21.484121 systemd-resolved[1286]: Positive Trust Anchors:
Jan 14 13:26:21.484138 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:26:21.484145 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 13:26:21.484184 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:26:21.512378 systemd-resolved[1286]: Defaulting to hostname 'linux'.
Jan 14 13:26:21.532991 zram_generator::config[1356]: No configuration found.
Jan 14 13:26:21.681254 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 14 13:26:21.696771 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 13:26:21.719909 kernel: ACPI: button: Power Button [PWRF]
Jan 14 13:26:21.752389 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 14 13:26:21.770737 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 14 13:26:21.788843 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 14 13:26:22.033061 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 13:26:22.051441 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 13:26:22.052919 systemd[1]: Reloading finished in 716 ms.
Jan 14 13:26:22.324977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:26:22.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:22.345906 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 13:26:22.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:22.365290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:26:22.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:22.385408 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 13:26:22.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:22.694226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:26:22.789888 kernel: kvm_amd: TSC scaling supported
Jan 14 13:26:22.789952 kernel: kvm_amd: Nested Virtualization enabled
Jan 14 13:26:22.789980 kernel: kvm_amd: Nested Paging enabled
Jan 14 13:26:22.805940 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 14 13:26:22.805978 kernel: kvm_amd: PMU virtualization is disabled
Jan 14 13:26:22.827398 systemd[1]: Starting ensure-sysext.service...
Jan 14 13:26:22.841318 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 13:26:22.861000 audit: BPF prog-id=30 op=LOAD
Jan 14 13:26:22.872420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 13:26:22.897335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:26:22.928401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:26:23.054000 audit: BPF prog-id=31 op=LOAD
Jan 14 13:26:23.055000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 13:26:23.055000 audit: BPF prog-id=32 op=LOAD
Jan 14 13:26:23.055000 audit: BPF prog-id=33 op=LOAD
Jan 14 13:26:23.055000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 13:26:23.055000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 13:26:23.060000 audit: BPF prog-id=34 op=LOAD
Jan 14 13:26:23.061000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=35 op=LOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=36 op=LOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=37 op=LOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 13:26:23.068000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=38 op=LOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=39 op=LOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=40 op=LOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=41 op=LOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=42 op=LOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=28 op=UNLOAD
Jan 14 13:26:23.072000 audit: BPF prog-id=29 op=UNLOAD
Jan 14 13:26:23.075000 audit: BPF prog-id=43 op=LOAD
Jan 14 13:26:23.078000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 13:26:23.081000 audit: BPF prog-id=44 op=LOAD
Jan 14 13:26:23.082000 audit: BPF prog-id=45 op=LOAD
Jan 14 13:26:23.082000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 13:26:23.082000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 13:26:23.130003 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 13:26:23.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:23.130837 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 13:26:23.130917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 13:26:23.131250 systemd-tmpfiles[1424]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 13:26:23.133868 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
Jan 14 13:26:23.133938 systemd-tmpfiles[1424]: ACLs are not supported, ignoring.
Jan 14 13:26:23.137295 systemd[1]: Reload requested from client PID 1420 ('systemctl') (unit ensure-sysext.service)...
Jan 14 13:26:23.137312 systemd[1]: Reloading...
Jan 14 13:26:23.226407 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:26:23.226988 systemd-tmpfiles[1424]: Skipping /boot
Jan 14 13:26:23.314068 systemd-tmpfiles[1424]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 13:26:23.314200 systemd-tmpfiles[1424]: Skipping /boot
Jan 14 13:26:23.430041 zram_generator::config[1465]: No configuration found.
Jan 14 13:26:23.466894 kernel: EDAC MC: Ver: 3.0.0
Jan 14 13:26:23.527403 systemd-networkd[1422]: lo: Link UP
Jan 14 13:26:23.527974 systemd-networkd[1422]: lo: Gained carrier
Jan 14 13:26:23.535225 systemd-networkd[1422]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 13:26:23.535238 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:26:23.539094 systemd-networkd[1422]: eth0: Link UP
Jan 14 13:26:23.543858 systemd-networkd[1422]: eth0: Gained carrier
Jan 14 13:26:23.544207 systemd-networkd[1422]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 13:26:23.571743 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 14 13:26:23.765090 systemd[1]: Reloading finished in 626 ms.
Jan 14 13:26:23.798438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 13:26:23.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:23.817000 audit: BPF prog-id=46 op=LOAD
Jan 14 13:26:23.817000 audit: BPF prog-id=38 op=UNLOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=47 op=LOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=48 op=LOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=39 op=UNLOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=40 op=UNLOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=49 op=LOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=50 op=LOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=41 op=UNLOAD
Jan 14 13:26:23.818000 audit: BPF prog-id=42 op=UNLOAD
Jan 14 13:26:23.819000 audit: BPF prog-id=51 op=LOAD
Jan 14 13:26:23.820000 audit: BPF prog-id=43 op=UNLOAD
Jan 14 13:26:23.820000 audit: BPF prog-id=52 op=LOAD
Jan 14 13:26:23.820000 audit: BPF prog-id=53 op=LOAD
Jan 14 13:26:23.820000 audit: BPF prog-id=44 op=UNLOAD
Jan 14 13:26:23.820000 audit: BPF prog-id=45 op=UNLOAD
Jan 14 13:26:23.821000 audit: BPF prog-id=54 op=LOAD
Jan 14 13:26:23.821000 audit: BPF prog-id=30 op=UNLOAD
Jan 14 13:26:23.822000 audit: BPF prog-id=55 op=LOAD
Jan 14 13:26:23.822000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 13:26:23.823000 audit: BPF prog-id=56 op=LOAD
Jan 14 13:26:23.824000 audit: BPF prog-id=31 op=UNLOAD
Jan 14 13:26:23.824000 audit: BPF prog-id=57 op=LOAD
Jan 14 13:26:23.824000 audit: BPF prog-id=58 op=LOAD
Jan 14 13:26:23.824000 audit: BPF prog-id=32 op=UNLOAD
Jan 14 13:26:23.824000 audit: BPF prog-id=33 op=UNLOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=59 op=LOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=35 op=UNLOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=60 op=LOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=61 op=LOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=36 op=UNLOAD
Jan 14 13:26:23.835000 audit: BPF prog-id=37 op=UNLOAD
Jan 14 13:26:23.844372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:26:23.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:23.864147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:26:23.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:23.910012 systemd[1]: Reached target network.target - Network.
Jan 14 13:26:23.920354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:23.923806 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:26:23.936262 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 13:26:23.948205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:26:23.970057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:26:23.986918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:26:24.012056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:26:24.026905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:26:24.027166 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 13:26:24.029405 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 13:26:24.043337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 13:26:24.057168 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 13:26:24.087048 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 14 13:26:24.104146 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 13:26:24.129744 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 13:26:24.146190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:24.152377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:26:24.154784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:26:24.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:24.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:26:24.175002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:26:24.193000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 14 13:26:24.193000 audit[1535]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc650c33e0 a2=420 a3=0 items=0 ppid=1506 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:26:24.193000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 13:26:24.198376 augenrules[1535]: No rules
Jan 14 13:26:24.180103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:26:24.199107 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:26:24.207108 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:26:24.221994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:26:24.222384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:26:24.239183 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 13:26:24.259109 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 13:26:24.295368 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:24.296148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:26:24.298904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:26:24.328012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:26:24.353895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:26:24.368394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:26:24.369203 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 13:26:24.369298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 13:26:24.369397 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:26:24.370048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:24.376356 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 14 13:26:24.398248 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 13:26:24.419233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:26:24.421782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:26:24.440432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:26:24.441792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:26:24.458205 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:26:24.458841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:26:24.490015 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:24.492833 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 13:26:24.505267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 13:26:24.517816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 13:26:24.536996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 13:26:24.555924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 13:26:24.571903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 13:26:24.572284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 13:26:24.572761 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 13:26:24.572860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 13:26:24.572967 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 13:26:24.573074 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 13:26:24.579722 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 13:26:24.580079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 13:26:24.606168 augenrules[1555]: /sbin/augenrules: No change
Jan 14 13:26:24.625262 systemd[1]: Finished ensure-sysext.service.
Jan 14 13:26:24.642231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 13:26:24.642990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 13:26:24.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 13:26:24.648000 audit[1577]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc1a5daab0 a2=420 a3=0 items=0 ppid=1555 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:26:24.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 13:26:24.656325 augenrules[1577]: No rules
Jan 14 13:26:24.655000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 14 13:26:24.655000 audit[1577]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc1a5dcf40 a2=420 a3=0 items=0 ppid=1555 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:26:24.655000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 13:26:24.660763 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 13:26:24.661155 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 13:26:24.678102 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 13:26:24.678847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 13:26:24.694081 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 13:26:24.694873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 13:26:24.721821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 13:26:24.721994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 13:26:24.726382 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 14 13:26:24.875845 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 14 13:26:24.893187 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 13:26:25.527832 systemd-timesyncd[1587]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 14 13:26:25.527894 systemd-timesyncd[1587]: Initial clock synchronization to Wed 2026-01-14 13:26:25.527715 UTC.
Jan 14 13:26:25.531698 systemd-resolved[1286]: Clock change detected. Flushing caches.
Jan 14 13:26:25.989989 ldconfig[1517]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 13:26:26.003025 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 13:26:26.020999 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 13:26:26.074691 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 13:26:26.091652 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 13:26:26.107799 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 13:26:26.125797 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:26:26.144880 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 13:26:26.162011 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:26:26.177951 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:26:26.182638 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 14 13:26:26.196438 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 13:26:26.215746 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 13:26:26.231851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:26:26.250492 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:26:26.250648 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:26:26.262539 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:26:26.278841 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:26:26.296957 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:26:26.315624 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 13:26:26.335479 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 13:26:26.352672 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 13:26:26.371617 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:26:26.385657 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 14 13:26:26.403931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:26:26.421036 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:26:26.435963 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:26:26.448756 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:26:26.460625 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:26:26.471535 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:26:26.471670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:26:26.475715 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:26:26.489920 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 13:26:26.516418 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:26:26.532790 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:26:26.550944 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:26:26.568795 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:26:26.585575 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:26:26.587942 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 13:26:26.590466 jq[1601]: false Jan 14 13:26:26.604733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:26:26.621556 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jan 14 13:26:26.639407 extend-filesystems[1602]: Found /dev/vda6 Jan 14 13:26:26.650427 extend-filesystems[1602]: Found /dev/vda9 Jan 14 13:26:26.664494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:26:26.676430 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing passwd entry cache Jan 14 13:26:26.676766 oslogin_cache_refresh[1603]: Refreshing passwd entry cache Jan 14 13:26:26.679883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:26:26.682441 extend-filesystems[1602]: Checking size of /dev/vda9 Jan 14 13:26:26.709628 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting users, quitting Jan 14 13:26:26.709628 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:26:26.709628 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing group entry cache Jan 14 13:26:26.709710 extend-filesystems[1602]: Resized partition /dev/vda9 Jan 14 13:26:26.708458 oslogin_cache_refresh[1603]: Failure getting users, quitting Jan 14 13:26:26.720733 extend-filesystems[1620]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 13:26:26.708477 oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:26:26.708526 oslogin_cache_refresh[1603]: Refreshing group entry cache Jan 14 13:26:26.737759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:26:26.755871 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting groups, quitting Jan 14 13:26:26.755930 oslogin_cache_refresh[1603]: Failure getting groups, quitting Jan 14 13:26:26.755994 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 14 13:26:26.756027 oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 13:26:26.758661 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:26:26.778716 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:26:26.789882 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:26:26.791015 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 13:26:26.793569 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 13:26:26.805522 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:26:26.831636 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:26:26.841814 jq[1637]: true Jan 14 13:26:26.845912 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:26:26.846667 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:26:26.847885 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 13:26:26.849868 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 13:26:26.867957 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:26:26.871027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:26:26.926689 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 13:26:26.927660 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 13:26:26.928523 jq[1647]: true Jan 14 13:26:26.943658 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 14 13:26:35.095944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:26:37.568203 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 13:26:37.561845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 13:26:37.562717 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 13:26:37.609482 bash[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:26:37.607505 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:26:37.629601 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 13:26:37.630880 tar[1646]: linux-amd64/LICENSE Jan 14 13:26:37.633865 tar[1646]: linux-amd64/helm Jan 14 13:26:37.636605 systemd-logind[1630]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 13:26:37.636633 systemd-logind[1630]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 13:26:37.637902 systemd-logind[1630]: New seat seat0. Jan 14 13:26:37.649722 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:26:37.685681 update_engine[1634]: I20260114 13:26:37.685602 1634 main.cc:92] Flatcar Update Engine starting Jan 14 13:26:37.726894 dbus-daemon[1599]: [system] SELinux support is enabled Jan 14 13:26:37.727592 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 13:26:37.761781 update_engine[1634]: I20260114 13:26:37.738674 1634 update_check_scheduler.cc:74] Next update check in 11m2s Jan 14 13:26:37.747504 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:26:37.765713 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 14 13:26:37.765777 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:26:37.785605 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:26:37.785638 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 13:26:37.791848 dbus-daemon[1599]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 13:26:37.812916 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 13:26:37.816063 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:26:37.839554 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:26:37.870618 extend-filesystems[1620]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 13:26:37.870618 extend-filesystems[1620]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 13:26:37.870618 extend-filesystems[1620]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 13:26:37.940540 extend-filesystems[1602]: Resized filesystem in /dev/vda9 Jan 14 13:26:37.886269 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:26:37.888601 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 14 13:26:38.079799 containerd[1649]: time="2026-01-14T13:26:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 13:26:38.082748 containerd[1649]: time="2026-01-14T13:26:38.082614484Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 13:26:38.085966 locksmithd[1699]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:26:38.102694 containerd[1649]: time="2026-01-14T13:26:38.102567467Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.035µs" Jan 14 13:26:38.102694 containerd[1649]: time="2026-01-14T13:26:38.102691308Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 13:26:38.102796 containerd[1649]: time="2026-01-14T13:26:38.102727897Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 13:26:38.102796 containerd[1649]: time="2026-01-14T13:26:38.102739238Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.102888136Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.102905789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.102964017Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.102974958Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103489278Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103503725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103520396Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103528441Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103691866Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103705130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103793135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105252 containerd[1649]: time="2026-01-14T13:26:38.103995553Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105659 containerd[1649]: time="2026-01-14T13:26:38.104023595Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:26:38.105659 containerd[1649]: time="2026-01-14T13:26:38.104032612Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 13:26:38.105659 containerd[1649]: time="2026-01-14T13:26:38.104070603Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 13:26:38.106910 containerd[1649]: time="2026-01-14T13:26:38.106796191Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 13:26:38.107265 containerd[1649]: time="2026-01-14T13:26:38.106964394Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120611868Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120653946Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120727594Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120737653Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120748794Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120758521Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120769542Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120778138Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120787947Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120798607Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120807282Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120816820Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120827039Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 13:26:38.123336 containerd[1649]: time="2026-01-14T13:26:38.120836317Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.120969876Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.120998369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121014759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121026281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121035037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121043844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121054343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121062809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121071846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121679520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121692104Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121711941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121749090Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 13:26:38.123802 containerd[1649]: time="2026-01-14T13:26:38.121759780Z" level=info msg="Start snapshots syncer" Jan 14 13:26:38.123802 containerd[1649]: 
time="2026-01-14T13:26:38.122460729Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 13:26:38.124469 containerd[1649]: time="2026-01-14T13:26:38.122735562Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 
13:26:38.124469 containerd[1649]: time="2026-01-14T13:26:38.122774815Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124480649Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124599330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124631000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124643223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124656337Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124669852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124682165Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124691933Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124703805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124717902Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 
13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124753268Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124765039Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:26:38.124784 containerd[1649]: time="2026-01-14T13:26:38.124776632Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124788353Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124796959Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124809944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124832185Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124856801Z" level=info msg="runtime interface created" Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124867020Z" level=info msg="created NRI interface" Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124881147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124898399Z" level=info msg="Connect containerd service" Jan 14 13:26:38.125460 containerd[1649]: time="2026-01-14T13:26:38.124928796Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:26:38.131246 containerd[1649]: time="2026-01-14T13:26:38.129648295Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:26:38.280331 sshd_keygen[1684]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:26:38.344824 containerd[1649]: time="2026-01-14T13:26:38.344756410Z" level=info msg="Start subscribing containerd event" Jan 14 13:26:38.345800 containerd[1649]: time="2026-01-14T13:26:38.345756211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 13:26:38.345835 containerd[1649]: time="2026-01-14T13:26:38.345813809Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:26:38.349737 containerd[1649]: time="2026-01-14T13:26:38.349528538Z" level=info msg="Start recovering state" Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.349867921Z" level=info msg="Start event monitor" Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350002242Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350013683Z" level=info msg="Start streaming server" Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350022219Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350029703Z" level=info msg="runtime interface starting up..." Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350036135Z" level=info msg="starting plugins..." 
Jan 14 13:26:38.350471 containerd[1649]: time="2026-01-14T13:26:38.350051163Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 13:26:38.351648 containerd[1649]: time="2026-01-14T13:26:38.350743395Z" level=info msg="containerd successfully booted in 0.271794s" Jan 14 13:26:38.350944 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:26:38.368685 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:26:38.390906 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 13:26:38.410628 tar[1646]: linux-amd64/README.md Jan 14 13:26:38.414490 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:48150.service - OpenSSH per-connection server daemon (10.0.0.1:48150). Jan 14 13:26:38.448495 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:26:38.452293 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:26:38.473473 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:26:38.507272 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 13:26:38.543751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:26:38.562981 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:26:38.578603 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:26:38.591654 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:26:38.668013 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 48150 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:38.674037 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:38.692971 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:26:38.709323 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 14 13:26:38.733897 systemd-logind[1630]: New session 1 of user core. Jan 14 13:26:38.782812 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:26:38.804515 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:26:38.840883 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:38.852002 systemd-logind[1630]: New session 2 of user core. Jan 14 13:26:39.076673 systemd[1749]: Queued start job for default target default.target. Jan 14 13:26:39.097936 systemd[1749]: Created slice app.slice - User Application Slice. Jan 14 13:26:39.098071 systemd[1749]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 13:26:39.098506 systemd[1749]: Reached target paths.target - Paths. Jan 14 13:26:39.098687 systemd[1749]: Reached target timers.target - Timers. Jan 14 13:26:39.101845 systemd[1749]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 13:26:39.103620 systemd[1749]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 13:26:39.140808 systemd[1749]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 13:26:39.141511 systemd[1749]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:26:39.141763 systemd[1749]: Reached target sockets.target - Sockets. Jan 14 13:26:39.141923 systemd[1749]: Reached target basic.target - Basic System. Jan 14 13:26:39.141988 systemd[1749]: Reached target default.target - Main User Target. Jan 14 13:26:39.142036 systemd[1749]: Startup finished in 275ms. Jan 14 13:26:39.142548 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:26:39.170962 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:26:39.222588 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:48166.service - OpenSSH per-connection server daemon (10.0.0.1:48166). 
Jan 14 13:26:39.390921 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 48166 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:39.393917 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:39.409667 systemd-logind[1630]: New session 3 of user core. Jan 14 13:26:39.416742 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:26:39.472031 sshd[1767]: Connection closed by 10.0.0.1 port 48166 Jan 14 13:26:39.472723 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:39.483623 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:48166.service: Deactivated successfully. Jan 14 13:26:39.487047 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 13:26:39.490563 systemd-logind[1630]: Session 3 logged out. Waiting for processes to exit. Jan 14 13:26:39.493716 systemd-logind[1630]: Removed session 3. Jan 14 13:26:39.495966 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:48178.service - OpenSSH per-connection server daemon (10.0.0.1:48178). Jan 14 13:26:39.615012 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 48178 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:39.618839 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:39.632012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:26:39.653802 systemd-logind[1630]: New session 4 of user core. Jan 14 13:26:39.668949 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:26:39.669501 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:26:39.683901 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:26:39.701049 systemd[1]: Startup finished in 11.193s (kernel) + 19.518s (initrd) + 33.550s (userspace) = 1min 4.261s. 
Jan 14 13:26:39.742516 sshd[1783]: Connection closed by 10.0.0.1 port 48178 Jan 14 13:26:39.742779 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:39.752641 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:48178.service: Deactivated successfully. Jan 14 13:26:39.757851 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:26:39.762743 systemd-logind[1630]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:26:39.767673 systemd-logind[1630]: Removed session 4. Jan 14 13:26:40.967365 kubelet[1782]: E0114 13:26:40.966955 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:26:40.973753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:26:40.974331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:26:40.975598 systemd[1]: kubelet.service: Consumed 1.512s CPU time, 267.4M memory peak. Jan 14 13:26:49.760382 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:39906.service - OpenSSH per-connection server daemon (10.0.0.1:39906). Jan 14 13:26:49.885750 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 39906 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:49.889636 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:49.903299 systemd-logind[1630]: New session 5 of user core. Jan 14 13:26:49.914705 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 14 13:26:49.960778 sshd[1805]: Connection closed by 10.0.0.1 port 39906 Jan 14 13:26:49.961962 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:49.981832 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:39906.service: Deactivated successfully. Jan 14 13:26:49.988622 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:26:49.992927 systemd-logind[1630]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:26:50.000707 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:39916.service - OpenSSH per-connection server daemon (10.0.0.1:39916). Jan 14 13:26:50.004564 systemd-logind[1630]: Removed session 5. Jan 14 13:26:50.127996 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 39916 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:50.131905 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:50.147936 systemd-logind[1630]: New session 6 of user core. Jan 14 13:26:50.167716 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:26:50.203766 sshd[1815]: Connection closed by 10.0.0.1 port 39916 Jan 14 13:26:50.204682 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:50.218666 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:39916.service: Deactivated successfully. Jan 14 13:26:50.223879 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:26:50.227826 systemd-logind[1630]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:26:50.234716 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:39918.service - OpenSSH per-connection server daemon (10.0.0.1:39918). Jan 14 13:26:50.236302 systemd-logind[1630]: Removed session 6. 
Jan 14 13:26:50.359968 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 39918 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:50.363682 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:50.379796 systemd-logind[1630]: New session 7 of user core. Jan 14 13:26:50.398823 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:26:50.445397 sshd[1825]: Connection closed by 10.0.0.1 port 39918 Jan 14 13:26:50.445879 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:50.459024 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:39918.service: Deactivated successfully. Jan 14 13:26:50.462717 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:26:50.467062 systemd-logind[1630]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:26:50.470898 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:39920.service - OpenSSH per-connection server daemon (10.0.0.1:39920). Jan 14 13:26:50.472977 systemd-logind[1630]: Removed session 7. Jan 14 13:26:50.595726 sshd[1831]: Accepted publickey for core from 10.0.0.1 port 39920 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:50.598821 sshd-session[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:50.614584 systemd-logind[1630]: New session 8 of user core. Jan 14 13:26:50.635934 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 13:26:50.706780 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:26:50.707926 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:26:50.742619 sudo[1836]: pam_unix(sudo:session): session closed for user root Jan 14 13:26:50.747858 sshd[1835]: Connection closed by 10.0.0.1 port 39920 Jan 14 13:26:50.747988 sshd-session[1831]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:50.759768 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:39920.service: Deactivated successfully. Jan 14 13:26:50.764669 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:26:50.770257 systemd-logind[1630]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:26:50.774800 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:39936.service - OpenSSH per-connection server daemon (10.0.0.1:39936). Jan 14 13:26:50.779021 systemd-logind[1630]: Removed session 8. Jan 14 13:26:50.905848 sshd[1843]: Accepted publickey for core from 10.0.0.1 port 39936 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:26:50.907962 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:50.925685 systemd-logind[1630]: New session 9 of user core. Jan 14 13:26:50.942023 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:26:50.991632 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:26:50.992637 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:26:51.187914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:26:51.191864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:26:59.439601 sudo[1849]: pam_unix(sudo:session): session closed for user root Jan 14 13:26:59.463762 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:26:59.464630 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:26:59.487960 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:26:59.622956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:26:59.638000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:26:59.641042 augenrules[1882]: No rules Jan 14 13:26:59.648610 kernel: kauditd_printk_skb: 95 callbacks suppressed Jan 14 13:26:59.648683 kernel: audit: type=1305 audit(1768397219.638:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:26:59.649369 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 14 13:26:59.638000 audit[1882]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed2800b60 a2=420 a3=0 items=0 ppid=1857 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:26:59.723666 kernel: audit: type=1300 audit(1768397219.638:225): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed2800b60 a2=420 a3=0 items=0 ppid=1857 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:26:59.638000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:26:59.724639 kernel: audit: type=1327 audit(1768397219.638:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:26:59.750000 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:26:59.750019 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:26:59.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.784569 kernel: audit: type=1130 audit(1768397219.749:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.784650 kernel: audit: type=1131 audit(1768397219.749:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:59.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.784913 sudo[1848]: pam_unix(sudo:session): session closed for user root Jan 14 13:26:59.788954 sshd[1847]: Connection closed by 10.0.0.1 port 39936 Jan 14 13:26:59.791904 sshd-session[1843]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:59.783000 audit[1848]: USER_END pid=1848 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.854535 kernel: audit: type=1106 audit(1768397219.783:228): pid=1848 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.854636 kernel: audit: type=1104 audit(1768397219.784:229): pid=1848 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.784000 audit[1848]: CRED_DISP pid=1848 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 13:26:59.792000 audit[1843]: USER_END pid=1843 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:26:59.939777 kernel: audit: type=1106 audit(1768397219.792:230): pid=1843 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:26:59.939876 kernel: audit: type=1104 audit(1768397219.793:231): pid=1843 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:26:59.793000 audit[1843]: CRED_DISP pid=1843 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:26:59.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:39936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:26:59.988669 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:39936.service: Deactivated successfully. Jan 14 13:26:59.995660 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:27:00.002471 systemd-logind[1630]: Session 9 logged out. Waiting for processes to exit. 
Jan 14 13:27:00.006745 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:41598.service - OpenSSH per-connection server daemon (10.0.0.1:41598). Jan 14 13:27:00.009892 kubelet[1881]: E0114 13:27:00.009638 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:27:00.010721 systemd-logind[1630]: Removed session 9. Jan 14 13:27:00.027000 kernel: audit: type=1131 audit(1768397219.988:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:39936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:00.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:00.028790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:27:00.029038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:27:00.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:27:00.037977 systemd[1]: kubelet.service: Consumed 570ms CPU time, 109.5M memory peak. 
Jan 14 13:27:00.131000 audit[1898]: USER_ACCT pid=1898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:00.133555 sshd[1898]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:27:00.135000 audit[1898]: CRED_ACQ pid=1898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:00.136000 audit[1898]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb5bc48c0 a2=3 a3=0 items=0 ppid=1 pid=1898 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:00.136000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:27:00.138776 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:00.156038 systemd-logind[1630]: New session 10 of user core. Jan 14 13:27:00.165612 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 13:27:00.174000 audit[1898]: USER_START pid=1898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:00.181000 audit[1903]: CRED_ACQ pid=1903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:00.217000 audit[1904]: USER_ACCT pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:00.218000 audit[1904]: CRED_REFR pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:00.219000 audit[1904]: USER_START pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:00.219402 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:27:00.220331 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:27:01.137479 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 13:27:01.173018 (dockerd)[1925]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:27:01.858622 dockerd[1925]: time="2026-01-14T13:27:01.857870872Z" level=info msg="Starting up" Jan 14 13:27:01.860887 dockerd[1925]: time="2026-01-14T13:27:01.860736302Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 13:27:01.914043 dockerd[1925]: time="2026-01-14T13:27:01.913862300Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 13:27:02.153853 dockerd[1925]: time="2026-01-14T13:27:02.153019315Z" level=info msg="Loading containers: start." Jan 14 13:27:02.194700 kernel: Initializing XFRM netlink socket Jan 14 13:27:02.479000 audit[1978]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.479000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffddc9ed3e0 a2=0 a3=0 items=0 ppid=1925 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:27:02.495000 audit[1980]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.495000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffffb26470 a2=0 a3=0 items=0 ppid=1925 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:02.495000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:27:02.512000 audit[1982]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.512000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb3a16780 a2=0 a3=0 items=0 ppid=1925 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:27:02.530000 audit[1984]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.530000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeace3f280 a2=0 a3=0 items=0 ppid=1925 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:27:02.549000 audit[1986]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.549000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd575bfd80 a2=0 a3=0 items=0 ppid=1925 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.549000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:27:02.566000 audit[1988]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.566000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde6d082e0 a2=0 a3=0 items=0 ppid=1925 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:27:02.583000 audit[1990]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.583000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdb20dfa60 a2=0 a3=0 items=0 ppid=1925 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:27:02.600000 audit[1992]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.600000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffcea47d80 a2=0 a3=0 items=0 ppid=1925 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.600000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:27:02.706000 audit[1995]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.706000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd564ebef0 a2=0 a3=0 items=0 ppid=1925 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 13:27:02.723000 audit[1997]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.723000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe55fbaf60 a2=0 a3=0 items=0 ppid=1925 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:27:02.738000 audit[1999]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.738000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc6d0e2770 a2=0 a3=0 items=0 ppid=1925 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:27:02.752000 audit[2001]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.752000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe8ef10940 a2=0 a3=0 items=0 ppid=1925 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:27:02.770000 audit[2003]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:02.770000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeb3e86d80 a2=0 a3=0 items=0 ppid=1925 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:02.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:27:03.038000 audit[2033]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.038000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcc0ec71d0 a2=0 a3=0 items=0 ppid=1925 pid=2033 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:27:03.052000 audit[2035]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.052000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeeda77b60 a2=0 a3=0 items=0 ppid=1925 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:27:03.067000 audit[2037]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.067000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8a952bb0 a2=0 a3=0 items=0 ppid=1925 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:27:03.084000 audit[2039]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.084000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2e318b70 a2=0 a3=0 items=0 ppid=1925 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:27:03.101000 audit[2041]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.101000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf6b576f0 a2=0 a3=0 items=0 ppid=1925 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:27:03.119000 audit[2043]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.119000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcb776b5a0 a2=0 a3=0 items=0 ppid=1925 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.119000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:27:03.137000 audit[2045]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.137000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff66a63a0 a2=0 a3=0 items=0 ppid=1925 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.137000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:27:03.153000 audit[2047]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.153000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdc88986b0 a2=0 a3=0 items=0 ppid=1925 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:27:03.176000 audit[2049]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.176000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc1c8b7110 a2=0 a3=0 items=0 ppid=1925 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 13:27:03.194000 audit[2051]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 
13:27:03.194000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe4fb7ebb0 a2=0 a3=0 items=0 ppid=1925 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:27:03.209000 audit[2053]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.209000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffde6ef8500 a2=0 a3=0 items=0 ppid=1925 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:27:03.226000 audit[2055]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.226000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc990fe9b0 a2=0 a3=0 items=0 ppid=1925 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:27:03.242000 audit[2057]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2057 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.242000 audit[2057]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcd4e06520 a2=0 a3=0 items=0 ppid=1925 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:27:03.287000 audit[2062]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.287000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd6f910d40 a2=0 a3=0 items=0 ppid=1925 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:27:03.306000 audit[2064]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.306000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd38576170 a2=0 a3=0 items=0 ppid=1925 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:27:03.325000 audit[2066]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2066 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.325000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd44ca09d0 a2=0 a3=0 items=0 ppid=1925 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.325000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:27:03.343000 audit[2068]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.343000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa312eea0 a2=0 a3=0 items=0 ppid=1925 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:27:03.363000 audit[2070]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.363000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcaf7c3940 a2=0 a3=0 items=0 ppid=1925 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:27:03.382000 audit[2072]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2072 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:03.382000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa8aa2c50 a2=0 a3=0 items=0 ppid=1925 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:27:03.458000 audit[2077]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.458000 audit[2077]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc3f09dce0 a2=0 a3=0 items=0 ppid=1925 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 13:27:03.474000 audit[2079]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.474000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd0b4ed850 a2=0 a3=0 items=0 ppid=1925 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 13:27:03.543000 audit[2087]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.543000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd67959000 a2=0 a3=0 items=0 ppid=1925 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 13:27:03.601000 audit[2093]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.601000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffee77902a0 a2=0 a3=0 items=0 ppid=1925 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 13:27:03.623000 audit[2095]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.623000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe1750fee0 a2=0 a3=0 items=0 ppid=1925 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.623000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 13:27:03.640000 audit[2097]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.640000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc77d3320 a2=0 a3=0 items=0 ppid=1925 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 13:27:03.660000 audit[2099]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.660000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff0ce77e30 a2=0 a3=0 items=0 ppid=1925 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:27:03.678000 audit[2101]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:03.678000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe0f82d030 
a2=0 a3=0 items=0 ppid=1925 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:03.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 13:27:03.681548 systemd-networkd[1422]: docker0: Link UP Jan 14 13:27:03.697061 dockerd[1925]: time="2026-01-14T13:27:03.696671892Z" level=info msg="Loading containers: done." Jan 14 13:27:03.747937 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1100188266-merged.mount: Deactivated successfully. Jan 14 13:27:03.757572 dockerd[1925]: time="2026-01-14T13:27:03.757288950Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:27:03.757572 dockerd[1925]: time="2026-01-14T13:27:03.757487683Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 13:27:03.757816 dockerd[1925]: time="2026-01-14T13:27:03.757606623Z" level=info msg="Initializing buildkit" Jan 14 13:27:03.910641 dockerd[1925]: time="2026-01-14T13:27:03.910054007Z" level=info msg="Completed buildkit initialization" Jan 14 13:27:03.926457 dockerd[1925]: time="2026-01-14T13:27:03.925985640Z" level=info msg="Daemon has completed initialization" Jan 14 13:27:03.926670 systemd[1]: Started docker.service - Docker Application Container Engine. 
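An aside for reading the audit records above: the `proctitle=` field in each PROCTITLE record is the process command line, hex-encoded with NUL bytes separating the arguments. A minimal sketch decoding one entry copied verbatim from the log:

```python
# Decode an audit PROCTITLE value: hex string -> NUL-separated argv.
# The hex below is taken directly from one of the records above.
hexstr = (
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D49"
    "00444F434B45522D464F5257415244002D6A00444F434B45522D425249444745"
)
args = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
print(args)
# ['/usr/bin/iptables', '--wait', '-I', 'DOCKER-FORWARD', '-j', 'DOCKER-BRIDGE']
```

The same decoding applies to every PROCTITLE record in this section; together they show dockerd creating its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2, DOCKER-CT, DOCKER-BRIDGE) for both IPv4 (`iptables`) and IPv6 (`ip6tables`).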
Jan 14 13:27:03.927946 dockerd[1925]: time="2026-01-14T13:27:03.926276180Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:27:03.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:05.479770 containerd[1649]: time="2026-01-14T13:27:05.478882587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 13:27:06.425805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167804314.mount: Deactivated successfully. Jan 14 13:27:09.822631 containerd[1649]: time="2026-01-14T13:27:09.822406760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:09.825213 containerd[1649]: time="2026-01-14T13:27:09.825159949Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=29108439" Jan 14 13:27:09.829694 containerd[1649]: time="2026-01-14T13:27:09.829418335Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:09.835477 containerd[1649]: time="2026-01-14T13:27:09.835225301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:09.836722 containerd[1649]: time="2026-01-14T13:27:09.836559188Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", 
size \"30111311\" in 4.357503331s" Jan 14 13:27:09.836722 containerd[1649]: time="2026-01-14T13:27:09.836683166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 13:27:09.838512 containerd[1649]: time="2026-01-14T13:27:09.837738096Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 13:27:10.186714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:27:10.189587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:10.518865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:10.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:10.524049 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 14 13:27:10.524225 kernel: audit: type=1130 audit(1768397230.518:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:27:10.558819 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:27:10.656386 kubelet[2211]: E0114 13:27:10.656332 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:27:10.661872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:27:10.662255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:27:10.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:27:10.665380 systemd[1]: kubelet.service: Consumed 358ms CPU time, 110.8M memory peak. Jan 14 13:27:10.681248 kernel: audit: type=1131 audit(1768397230.661:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 13:27:11.884044 containerd[1649]: time="2026-01-14T13:27:11.883707456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:11.886839 containerd[1649]: time="2026-01-14T13:27:11.886547025Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 13:27:11.888720 containerd[1649]: time="2026-01-14T13:27:11.888392493Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:11.894515 containerd[1649]: time="2026-01-14T13:27:11.894304221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:11.895558 containerd[1649]: time="2026-01-14T13:27:11.895351797Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.05757997s" Jan 14 13:27:11.895558 containerd[1649]: time="2026-01-14T13:27:11.895455469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 13:27:11.897015 containerd[1649]: time="2026-01-14T13:27:11.896842428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 13:27:13.895758 containerd[1649]: time="2026-01-14T13:27:13.895601211Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:13.897718 containerd[1649]: time="2026-01-14T13:27:13.897439546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 13:27:13.899503 containerd[1649]: time="2026-01-14T13:27:13.899356514Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:13.902826 containerd[1649]: time="2026-01-14T13:27:13.902692356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:13.904240 containerd[1649]: time="2026-01-14T13:27:13.903991517Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.007045236s" Jan 14 13:27:13.904405 containerd[1649]: time="2026-01-14T13:27:13.904291847Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 13:27:13.905579 containerd[1649]: time="2026-01-14T13:27:13.905529723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 13:27:15.185494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601308244.mount: Deactivated successfully. 
Jan 14 13:27:16.233896 containerd[1649]: time="2026-01-14T13:27:16.233632871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:16.236252 containerd[1649]: time="2026-01-14T13:27:16.236221482Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 14 13:27:16.239236 containerd[1649]: time="2026-01-14T13:27:16.238819782Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:16.244255 containerd[1649]: time="2026-01-14T13:27:16.244231743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:16.245777 containerd[1649]: time="2026-01-14T13:27:16.245669013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.340111203s" Jan 14 13:27:16.245777 containerd[1649]: time="2026-01-14T13:27:16.245698953Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 13:27:16.247190 containerd[1649]: time="2026-01-14T13:27:16.247045279Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 13:27:16.773439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649497612.mount: Deactivated successfully. 
Jan 14 13:27:18.403355 containerd[1649]: time="2026-01-14T13:27:18.402724312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:18.405716 containerd[1649]: time="2026-01-14T13:27:18.405657850Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Jan 14 13:27:18.407883 containerd[1649]: time="2026-01-14T13:27:18.407521353Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:18.412953 containerd[1649]: time="2026-01-14T13:27:18.412675124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:18.414159 containerd[1649]: time="2026-01-14T13:27:18.413993526Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.166676033s" Jan 14 13:27:18.414417 containerd[1649]: time="2026-01-14T13:27:18.414280536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 13:27:18.415185 containerd[1649]: time="2026-01-14T13:27:18.415027054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 13:27:18.868574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345100083.mount: Deactivated successfully. 
Jan 14 13:27:18.884702 containerd[1649]: time="2026-01-14T13:27:18.884418714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:27:18.886320 containerd[1649]: time="2026-01-14T13:27:18.886226474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 13:27:18.888968 containerd[1649]: time="2026-01-14T13:27:18.888780539Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:27:18.893949 containerd[1649]: time="2026-01-14T13:27:18.893787607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:27:18.895206 containerd[1649]: time="2026-01-14T13:27:18.894532939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.477218ms" Jan 14 13:27:18.895206 containerd[1649]: time="2026-01-14T13:27:18.894574061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 13:27:18.895702 containerd[1649]: time="2026-01-14T13:27:18.895592382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 13:27:19.401584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172240405.mount: Deactivated 
successfully. Jan 14 13:27:20.686602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:27:20.690286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:20.919241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:20.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:20.943253 kernel: audit: type=1130 audit(1768397240.919:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:20.949719 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:27:21.041872 kubelet[2353]: E0114 13:27:21.041589 2353 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:27:21.045842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:27:21.046037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:27:21.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:27:21.046675 systemd[1]: kubelet.service: Consumed 288ms CPU time, 108.4M memory peak. 
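The repeated kubelet failures above share one cause: the config file named in the error message does not exist yet, so kubelet exits with status 1 and systemd schedules another restart. That file is normally generated when the node is initialized or joined with kubeadm. A minimal sketch of the check (path taken from the error message; the helper name is ours):

```python
import os

# Path from the "failed to load kubelet config file" error above;
# normally written by `kubeadm init` / `kubeadm join`.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
    # kubelet exits with status 1 when this file is missing,
    # and systemd restarts the unit (restart counter climbing above).
    return os.path.isfile(path)

print(kubelet_config_present())  # False on a node not yet initialized/joined
```

Once kubeadm writes the file, the same restart loop lets kubelet come up without further intervention.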
Jan 14 13:27:21.065349 kernel: audit: type=1131 audit(1768397241.046:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:27:22.222734 containerd[1649]: time="2026-01-14T13:27:22.222689849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:22.224863 containerd[1649]: time="2026-01-14T13:27:22.224654779Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 14 13:27:22.226703 containerd[1649]: time="2026-01-14T13:27:22.226491543Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:22.230914 containerd[1649]: time="2026-01-14T13:27:22.230694463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:22.232188 containerd[1649]: time="2026-01-14T13:27:22.231984396Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.336301292s" Jan 14 13:27:22.232298 containerd[1649]: time="2026-01-14T13:27:22.232069042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 13:27:23.460582 update_engine[1634]: I20260114 13:27:23.460304 1634 update_attempter.cc:509] Updating boot flags... 
Jan 14 13:27:25.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:25.883700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:25.884624 systemd[1]: kubelet.service: Consumed 288ms CPU time, 108.4M memory peak. Jan 14 13:27:25.889439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:25.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:25.918265 kernel: audit: type=1130 audit(1768397245.883:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:25.918320 kernel: audit: type=1131 audit(1768397245.883:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:25.952543 systemd[1]: Reload requested from client PID 2414 ('systemctl') (unit session-10.scope)... Jan 14 13:27:25.952652 systemd[1]: Reloading... Jan 14 13:27:26.085227 zram_generator::config[2462]: No configuration found. Jan 14 13:27:26.352071 systemd[1]: Reloading finished in 398 ms. 
Jan 14 13:27:26.392000 audit: BPF prog-id=66 op=LOAD Jan 14 13:27:26.402433 kernel: audit: type=1334 audit(1768397246.392:290): prog-id=66 op=LOAD Jan 14 13:27:26.402529 kernel: audit: type=1334 audit(1768397246.393:291): prog-id=62 op=UNLOAD Jan 14 13:27:26.393000 audit: BPF prog-id=62 op=UNLOAD Jan 14 13:27:26.394000 audit: BPF prog-id=67 op=LOAD Jan 14 13:27:26.417774 kernel: audit: type=1334 audit(1768397246.394:292): prog-id=67 op=LOAD Jan 14 13:27:26.417845 kernel: audit: type=1334 audit(1768397246.394:293): prog-id=55 op=UNLOAD Jan 14 13:27:26.394000 audit: BPF prog-id=55 op=UNLOAD Jan 14 13:27:26.425507 kernel: audit: type=1334 audit(1768397246.395:294): prog-id=68 op=LOAD Jan 14 13:27:26.395000 audit: BPF prog-id=68 op=LOAD Jan 14 13:27:26.433160 kernel: audit: type=1334 audit(1768397246.395:295): prog-id=56 op=UNLOAD Jan 14 13:27:26.395000 audit: BPF prog-id=56 op=UNLOAD Jan 14 13:27:26.440997 kernel: audit: type=1334 audit(1768397246.395:296): prog-id=69 op=LOAD Jan 14 13:27:26.395000 audit: BPF prog-id=69 op=LOAD Jan 14 13:27:26.448494 kernel: audit: type=1334 audit(1768397246.395:297): prog-id=70 op=LOAD Jan 14 13:27:26.395000 audit: BPF prog-id=70 op=LOAD Jan 14 13:27:26.454256 kernel: audit: type=1334 audit(1768397246.395:298): prog-id=57 op=UNLOAD Jan 14 13:27:26.395000 audit: BPF prog-id=57 op=UNLOAD Jan 14 13:27:26.459802 kernel: audit: type=1334 audit(1768397246.395:299): prog-id=58 op=UNLOAD Jan 14 13:27:26.395000 audit: BPF prog-id=58 op=UNLOAD Jan 14 13:27:26.396000 audit: BPF prog-id=71 op=LOAD Jan 14 13:27:26.396000 audit: BPF prog-id=51 op=UNLOAD Jan 14 13:27:26.396000 audit: BPF prog-id=72 op=LOAD Jan 14 13:27:26.396000 audit: BPF prog-id=73 op=LOAD Jan 14 13:27:26.396000 audit: BPF prog-id=52 op=UNLOAD Jan 14 13:27:26.396000 audit: BPF prog-id=53 op=UNLOAD Jan 14 13:27:26.397000 audit: BPF prog-id=74 op=LOAD Jan 14 13:27:26.397000 audit: BPF prog-id=59 op=UNLOAD Jan 14 13:27:26.398000 audit: BPF prog-id=75 op=LOAD Jan 14 13:27:26.398000 
audit: BPF prog-id=76 op=LOAD Jan 14 13:27:26.398000 audit: BPF prog-id=60 op=UNLOAD Jan 14 13:27:26.398000 audit: BPF prog-id=61 op=UNLOAD Jan 14 13:27:26.400000 audit: BPF prog-id=77 op=LOAD Jan 14 13:27:26.401000 audit: BPF prog-id=63 op=UNLOAD Jan 14 13:27:26.401000 audit: BPF prog-id=78 op=LOAD Jan 14 13:27:26.401000 audit: BPF prog-id=79 op=LOAD Jan 14 13:27:26.401000 audit: BPF prog-id=64 op=UNLOAD Jan 14 13:27:26.401000 audit: BPF prog-id=65 op=UNLOAD Jan 14 13:27:26.471000 audit: BPF prog-id=80 op=LOAD Jan 14 13:27:26.471000 audit: BPF prog-id=46 op=UNLOAD Jan 14 13:27:26.471000 audit: BPF prog-id=81 op=LOAD Jan 14 13:27:26.471000 audit: BPF prog-id=82 op=LOAD Jan 14 13:27:26.471000 audit: BPF prog-id=47 op=UNLOAD Jan 14 13:27:26.471000 audit: BPF prog-id=48 op=UNLOAD Jan 14 13:27:26.472000 audit: BPF prog-id=83 op=LOAD Jan 14 13:27:26.472000 audit: BPF prog-id=84 op=LOAD Jan 14 13:27:26.472000 audit: BPF prog-id=49 op=UNLOAD Jan 14 13:27:26.472000 audit: BPF prog-id=50 op=UNLOAD Jan 14 13:27:26.474000 audit: BPF prog-id=85 op=LOAD Jan 14 13:27:26.474000 audit: BPF prog-id=54 op=UNLOAD Jan 14 13:27:26.505848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:27:26.506012 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:27:26.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:27:26.506693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:26.506751 systemd[1]: kubelet.service: Consumed 203ms CPU time, 98.5M memory peak. Jan 14 13:27:26.509235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:26.740568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:27:26.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:26.753515 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:27:26.865370 kubelet[2507]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:27:26.865370 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 13:27:26.865370 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 13:27:26.865922 kubelet[2507]: I0114 13:27:26.865306 2507 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:27:27.626578 kubelet[2507]: I0114 13:27:27.626370 2507 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 13:27:27.626578 kubelet[2507]: I0114 13:27:27.626453 2507 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:27:27.626803 kubelet[2507]: I0114 13:27:27.626646 2507 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 13:27:27.666808 kubelet[2507]: E0114 13:27:27.666484 2507 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 13:27:27.669553 kubelet[2507]: I0114 13:27:27.669240 2507 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:27:27.686762 kubelet[2507]: I0114 13:27:27.686505 2507 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:27:27.697545 kubelet[2507]: I0114 13:27:27.697360 2507 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:27:27.697996 kubelet[2507]: I0114 13:27:27.697802 2507 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:27:27.698357 kubelet[2507]: I0114 13:27:27.697904 2507 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:27:27.698357 kubelet[2507]: I0114 13:27:27.698277 2507 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:27:27.698357 
kubelet[2507]: I0114 13:27:27.698286 2507 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 13:27:27.698651 kubelet[2507]: I0114 13:27:27.698404 2507 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:27:27.703518 kubelet[2507]: I0114 13:27:27.703488 2507 kubelet.go:480] "Attempting to sync node with API server" Jan 14 13:27:27.703518 kubelet[2507]: I0114 13:27:27.703515 2507 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:27:27.703595 kubelet[2507]: I0114 13:27:27.703547 2507 kubelet.go:386] "Adding apiserver pod source" Jan 14 13:27:27.703595 kubelet[2507]: I0114 13:27:27.703568 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:27:27.711297 kubelet[2507]: E0114 13:27:27.710384 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 13:27:27.711297 kubelet[2507]: E0114 13:27:27.710564 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 13:27:27.713471 kubelet[2507]: I0114 13:27:27.713454 2507 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:27:27.714296 kubelet[2507]: I0114 13:27:27.714002 2507 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 13:27:27.715438 kubelet[2507]: W0114 
13:27:27.715249 2507 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:27:27.724015 kubelet[2507]: I0114 13:27:27.723915 2507 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:27:27.724689 kubelet[2507]: I0114 13:27:27.724063 2507 server.go:1289] "Started kubelet" Jan 14 13:27:27.732230 kubelet[2507]: I0114 13:27:27.730953 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:27:27.732230 kubelet[2507]: I0114 13:27:27.730975 2507 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:27:27.733008 kubelet[2507]: I0114 13:27:27.732951 2507 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:27:27.733393 kubelet[2507]: I0114 13:27:27.730932 2507 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:27:27.737911 kubelet[2507]: I0114 13:27:27.737820 2507 factory.go:223] Registration of the systemd container factory successfully Jan 14 13:27:27.738466 kubelet[2507]: I0114 13:27:27.737973 2507 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:27:27.738466 kubelet[2507]: I0114 13:27:27.734639 2507 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:27:27.738466 kubelet[2507]: I0114 13:27:27.738340 2507 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:27:27.738466 kubelet[2507]: E0114 13:27:27.738449 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 
13:27:27.738627 kubelet[2507]: E0114 13:27:27.738526 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Jan 14 13:27:27.740416 kubelet[2507]: I0114 13:27:27.735283 2507 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:27:27.740416 kubelet[2507]: I0114 13:27:27.740271 2507 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:27:27.740416 kubelet[2507]: E0114 13:27:27.735503 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 13:27:27.742363 kubelet[2507]: I0114 13:27:27.741946 2507 server.go:317] "Adding debug handlers to kubelet server" Jan 14 13:27:27.745213 kubelet[2507]: E0114 13:27:27.743867 2507 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:27:27.745213 kubelet[2507]: E0114 13:27:27.742654 2507 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a9bed6c5e55a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 13:27:27.724017063 +0000 UTC m=+0.961172562,LastTimestamp:2026-01-14 13:27:27.724017063 +0000 UTC m=+0.961172562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 13:27:27.747403 kubelet[2507]: I0114 13:27:27.747298 2507 factory.go:223] Registration of the containerd container factory successfully Jan 14 13:27:27.765000 audit[2528]: NETFILTER_CFG table=mangle:42 family=10 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:27.765000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc6321e320 a2=0 a3=0 items=0 ppid=2507 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.765000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:27:27.767330 kubelet[2507]: I0114 13:27:27.767262 2507 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 13:27:27.770000 audit[2530]: NETFILTER_CFG table=mangle:43 family=2 entries=2 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.770000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff4be8abf0 a2=0 a3=0 items=0 ppid=2507 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:27:27.773000 audit[2531]: NETFILTER_CFG table=mangle:44 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:27.773000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc48821a0 a2=0 a3=0 items=0 ppid=2507 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:27:27.775000 audit[2533]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.775000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9f6e40e0 a2=0 a3=0 items=0 ppid=2507 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.775000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:27:27.778532 kubelet[2507]: I0114 13:27:27.778518 2507 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:27:27.778608 kubelet[2507]: I0114 13:27:27.778598 2507 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:27:27.778831 kubelet[2507]: I0114 13:27:27.778645 2507 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:27:27.778000 audit[2534]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:27.778000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc35a11e50 a2=0 a3=0 items=0 ppid=2507 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:27:27.784047 kubelet[2507]: I0114 13:27:27.783580 2507 policy_none.go:49] "None policy: Start" Jan 14 13:27:27.784047 kubelet[2507]: I0114 13:27:27.783607 2507 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:27:27.784047 kubelet[2507]: I0114 13:27:27.783623 2507 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:27:27.786000 audit[2537]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:27.786000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6efb1540 a2=0 a3=0 items=0 ppid=2507 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:27:27.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:27:27.787000 audit[2536]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.787000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff87a80ae0 a2=0 a3=0 items=0 ppid=2507 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:27:27.795000 audit[2539]: NETFILTER_CFG table=filter:49 family=2 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.795000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe53313e30 a2=0 a3=0 items=0 ppid=2507 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:27:27.799844 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 13:27:27.818000 audit[2542]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.818000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd67da64e0 a2=0 a3=0 items=0 ppid=2507 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 13:27:27.820433 kubelet[2507]: I0114 13:27:27.819993 2507 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 13:27:27.820433 kubelet[2507]: I0114 13:27:27.820028 2507 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 13:27:27.820433 kubelet[2507]: I0114 13:27:27.820051 2507 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 13:27:27.820433 kubelet[2507]: I0114 13:27:27.820061 2507 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 13:27:27.820433 kubelet[2507]: E0114 13:27:27.820273 2507 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:27:27.820670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 14 13:27:27.821218 kubelet[2507]: E0114 13:27:27.821024 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 13:27:27.823000 audit[2544]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.823000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8e4b53f0 a2=0 a3=0 items=0 ppid=2507 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:27:27.826842 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 13:27:27.830000 audit[2545]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.830000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd34c57420 a2=0 a3=0 items=0 ppid=2507 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:27:27.835000 audit[2546]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:27.835000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff29ab5720 a2=0 a3=0 items=0 ppid=2507 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:27.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:27:27.841367 kubelet[2507]: E0114 13:27:27.841347 2507 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 13:27:27.842043 kubelet[2507]: E0114 13:27:27.841447 2507 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 13:27:27.842915 kubelet[2507]: I0114 13:27:27.842275 2507 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:27:27.842915 kubelet[2507]: I0114 13:27:27.842289 2507 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" 
Jan 14 13:27:27.842915 kubelet[2507]: I0114 13:27:27.842801 2507 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:27:27.844057 kubelet[2507]: E0114 13:27:27.844027 2507 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 13:27:27.844255 kubelet[2507]: E0114 13:27:27.844061 2507 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 13:27:27.942027 kubelet[2507]: E0114 13:27:27.940872 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Jan 14 13:27:27.951278 kubelet[2507]: I0114 13:27:27.950795 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:27:27.952529 kubelet[2507]: E0114 13:27:27.952267 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 14 13:27:27.975928 systemd[1]: Created slice kubepods-burstable-pod9543835c0adc08e6f94864c11eaea9c2.slice - libcontainer container kubepods-burstable-pod9543835c0adc08e6f94864c11eaea9c2.slice. Jan 14 13:27:27.979691 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 14 13:27:28.016299 kubelet[2507]: E0114 13:27:28.016062 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:28.018062 kubelet[2507]: E0114 13:27:28.017955 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:28.020381 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 14 13:27:28.024801 kubelet[2507]: E0114 13:27:28.024067 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:28.039808 kubelet[2507]: I0114 13:27:28.039512 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:28.039808 kubelet[2507]: I0114 13:27:28.039628 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:28.039808 kubelet[2507]: I0114 13:27:28.039660 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " 
pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:28.039808 kubelet[2507]: I0114 13:27:28.039682 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:28.039808 kubelet[2507]: I0114 13:27:28.039702 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:28.039959 kubelet[2507]: I0114 13:27:28.039827 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:28.039959 kubelet[2507]: I0114 13:27:28.039849 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:28.039959 kubelet[2507]: I0114 13:27:28.039868 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:28.141463 kubelet[2507]: I0114 13:27:28.141397 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:28.156150 kubelet[2507]: I0114 13:27:28.155959 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:27:28.156600 kubelet[2507]: E0114 13:27:28.156445 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 14 13:27:28.318354 kubelet[2507]: E0114 13:27:28.317724 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.318672 kubelet[2507]: E0114 13:27:28.318542 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.319556 containerd[1649]: time="2026-01-14T13:27:28.319465914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9543835c0adc08e6f94864c11eaea9c2,Namespace:kube-system,Attempt:0,}" Jan 14 13:27:28.320049 containerd[1649]: time="2026-01-14T13:27:28.319733571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 14 13:27:28.325909 kubelet[2507]: E0114 13:27:28.325660 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.326488 containerd[1649]: time="2026-01-14T13:27:28.326069366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 14 13:27:28.342414 kubelet[2507]: E0114 13:27:28.342368 2507 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Jan 14 13:27:28.402194 containerd[1649]: time="2026-01-14T13:27:28.402022482Z" level=info msg="connecting to shim 6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d" address="unix:///run/containerd/s/5d77bd00927006470bc15dd4fa46dcc4019df641c2e72ded37760c320405aa97" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:28.403652 containerd[1649]: time="2026-01-14T13:27:28.403534971Z" level=info msg="connecting to shim 7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6" address="unix:///run/containerd/s/bc349eb8100ae0d6ca80468bab5c08c41ea116dd50829f65a7742606d3609641" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:28.404596 containerd[1649]: time="2026-01-14T13:27:28.404433064Z" level=info msg="connecting to shim 7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705" address="unix:///run/containerd/s/dd053007f608ca64ed844073558951ae8da364b44d4d334ef4a60973e88c794e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:28.477842 systemd[1]: Started cri-containerd-6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d.scope - libcontainer container 6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d. 
Jan 14 13:27:28.481056 systemd[1]: Started cri-containerd-7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6.scope - libcontainer container 7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6. Jan 14 13:27:28.488394 systemd[1]: Started cri-containerd-7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705.scope - libcontainer container 7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705. Jan 14 13:27:28.515000 audit: BPF prog-id=86 op=LOAD Jan 14 13:27:28.516000 audit: BPF prog-id=87 op=LOAD Jan 14 13:27:28.516000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=87 op=UNLOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=88 op=LOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c000128488 a2=98 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=89 op=LOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=89 op=UNLOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=88 op=UNLOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.517000 audit: BPF prog-id=90 op=LOAD Jan 14 13:27:28.517000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2569 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663666464376435626233363536383735363132393138393134346163 Jan 14 13:27:28.525000 audit: BPF prog-id=91 op=LOAD Jan 14 13:27:28.526000 audit: BPF prog-id=92 op=LOAD Jan 14 13:27:28.526000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.526000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 
13:27:28.527000 audit: BPF prog-id=92 op=UNLOAD Jan 14 13:27:28.527000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.527000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.529000 audit: BPF prog-id=93 op=LOAD Jan 14 13:27:28.529000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.531000 audit: BPF prog-id=94 op=LOAD Jan 14 13:27:28.531000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.532000 audit: BPF prog-id=95 op=LOAD Jan 14 13:27:28.531000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.532000 audit: BPF prog-id=94 op=UNLOAD Jan 14 13:27:28.532000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.532000 audit: BPF prog-id=93 op=UNLOAD Jan 14 13:27:28.535000 audit: BPF prog-id=96 op=LOAD Jan 14 13:27:28.535000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.535000 audit: BPF prog-id=96 op=UNLOAD Jan 14 13:27:28.535000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.532000 audit[2612]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.532000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.535000 audit: BPF prog-id=97 op=LOAD Jan 14 13:27:28.535000 audit[2612]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2574 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737383263303537646664303066393234663331313235653962373030 Jan 14 13:27:28.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.536000 audit: BPF prog-id=98 op=LOAD Jan 14 13:27:28.536000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.537000 audit: BPF prog-id=99 op=LOAD Jan 14 13:27:28.537000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.537000 audit: BPF prog-id=99 op=UNLOAD Jan 14 13:27:28.537000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.538000 audit: BPF prog-id=98 op=UNLOAD Jan 14 13:27:28.538000 audit[2610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.539000 audit: BPF prog-id=100 op=LOAD Jan 14 13:27:28.539000 audit[2610]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2573 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765383932373234623565303835636264316230393937613238363365 Jan 14 13:27:28.564219 kubelet[2507]: I0114 13:27:28.563475 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:27:28.564560 kubelet[2507]: E0114 13:27:28.564445 2507 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Jan 14 13:27:28.601435 kubelet[2507]: E0114 13:27:28.600336 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 13:27:28.605440 containerd[1649]: time="2026-01-14T13:27:28.603973971Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d\"" Jan 14 13:27:28.609906 kubelet[2507]: E0114 13:27:28.608567 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.616506 containerd[1649]: time="2026-01-14T13:27:28.616378125Z" level=info msg="CreateContainer within sandbox \"6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:27:28.624258 containerd[1649]: time="2026-01-14T13:27:28.623839112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6\"" Jan 14 13:27:28.624511 kubelet[2507]: E0114 13:27:28.624413 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.633345 containerd[1649]: time="2026-01-14T13:27:28.633016170Z" level=info msg="CreateContainer within sandbox \"7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:27:28.645667 containerd[1649]: time="2026-01-14T13:27:28.645558100Z" level=info msg="Container 75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:27:28.651206 containerd[1649]: time="2026-01-14T13:27:28.650703106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9543835c0adc08e6f94864c11eaea9c2,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705\"" Jan 14 13:27:28.653729 kubelet[2507]: E0114 13:27:28.653666 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:28.663313 containerd[1649]: time="2026-01-14T13:27:28.663272015Z" level=info msg="CreateContainer within sandbox \"7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:27:28.666048 containerd[1649]: time="2026-01-14T13:27:28.665970408Z" level=info msg="Container 0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:27:28.670338 containerd[1649]: time="2026-01-14T13:27:28.669901186Z" level=info msg="CreateContainer within sandbox \"6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d\"" Jan 14 13:27:28.671439 containerd[1649]: time="2026-01-14T13:27:28.671320624Z" level=info msg="StartContainer for \"75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d\"" Jan 14 13:27:28.673367 containerd[1649]: time="2026-01-14T13:27:28.673334350Z" level=info msg="connecting to shim 75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d" address="unix:///run/containerd/s/5d77bd00927006470bc15dd4fa46dcc4019df641c2e72ded37760c320405aa97" protocol=ttrpc version=3 Jan 14 13:27:28.681306 containerd[1649]: time="2026-01-14T13:27:28.681054647Z" level=info msg="CreateContainer within sandbox \"7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59\"" Jan 14 
13:27:28.682416 containerd[1649]: time="2026-01-14T13:27:28.682380446Z" level=info msg="StartContainer for \"0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59\"" Jan 14 13:27:28.684243 containerd[1649]: time="2026-01-14T13:27:28.684221609Z" level=info msg="connecting to shim 0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59" address="unix:///run/containerd/s/bc349eb8100ae0d6ca80468bab5c08c41ea116dd50829f65a7742606d3609641" protocol=ttrpc version=3 Jan 14 13:27:28.691621 containerd[1649]: time="2026-01-14T13:27:28.691580166Z" level=info msg="Container f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:27:28.702892 containerd[1649]: time="2026-01-14T13:27:28.702754880Z" level=info msg="CreateContainer within sandbox \"7782c057dfd00f924f31125e9b700ff3152a1cd2f968b25d6eee3fd321e1d705\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b\"" Jan 14 13:27:28.704282 containerd[1649]: time="2026-01-14T13:27:28.704259233Z" level=info msg="StartContainer for \"f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b\"" Jan 14 13:27:28.707010 containerd[1649]: time="2026-01-14T13:27:28.706693211Z" level=info msg="connecting to shim f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b" address="unix:///run/containerd/s/dd053007f608ca64ed844073558951ae8da364b44d4d334ef4a60973e88c794e" protocol=ttrpc version=3 Jan 14 13:27:28.708868 systemd[1]: Started cri-containerd-75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d.scope - libcontainer container 75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d. Jan 14 13:27:28.749421 systemd[1]: Started cri-containerd-0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59.scope - libcontainer container 0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59. 
Jan 14 13:27:28.761581 systemd[1]: Started cri-containerd-f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b.scope - libcontainer container f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b. Jan 14 13:27:28.766000 audit: BPF prog-id=101 op=LOAD Jan 14 13:27:28.768000 audit: BPF prog-id=102 op=LOAD Jan 14 13:27:28.768000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.768000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.770000 audit: BPF prog-id=102 op=UNLOAD Jan 14 13:27:28.770000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.771000 audit: BPF prog-id=103 op=LOAD Jan 14 13:27:28.771000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:27:28.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.772000 audit: BPF prog-id=104 op=LOAD Jan 14 13:27:28.772000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.772000 audit: BPF prog-id=104 op=UNLOAD Jan 14 13:27:28.772000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.772000 audit: BPF prog-id=103 op=UNLOAD Jan 14 13:27:28.772000 audit[2685]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.773000 audit: BPF prog-id=105 op=LOAD Jan 14 13:27:28.773000 audit[2685]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2569 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735643232626363333932393832303334643138333832353463623866 Jan 14 13:27:28.788000 audit: BPF prog-id=106 op=LOAD Jan 14 13:27:28.790000 audit: BPF prog-id=107 op=LOAD Jan 14 13:27:28.790000 audit[2702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=107 op=UNLOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=108 op=LOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=109 op=LOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=109 op=UNLOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=108 op=UNLOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.791000 audit: BPF prog-id=110 op=LOAD Jan 14 13:27:28.791000 audit: BPF prog-id=111 op=LOAD Jan 14 13:27:28.791000 audit[2702]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2574 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631346161396461626631306432343532393131393337393937353933 Jan 14 13:27:28.792000 audit: BPF prog-id=112 op=LOAD Jan 14 13:27:28.792000 
audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.792000 audit: BPF prog-id=112 op=UNLOAD Jan 14 13:27:28.792000 audit[2694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.793000 audit: BPF prog-id=113 op=LOAD Jan 14 13:27:28.793000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.793000 audit: BPF 
prog-id=114 op=LOAD Jan 14 13:27:28.793000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.795000 audit: BPF prog-id=114 op=UNLOAD Jan 14 13:27:28.795000 audit[2694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.795000 audit: BPF prog-id=113 op=UNLOAD Jan 14 13:27:28.795000 audit[2694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 
Jan 14 13:27:28.795000 audit: BPF prog-id=115 op=LOAD Jan 14 13:27:28.795000 audit[2694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2573 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:28.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066393836323064633930643033396132333138633437336438303930 Jan 14 13:27:28.910971 containerd[1649]: time="2026-01-14T13:27:28.908979558Z" level=info msg="StartContainer for \"75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d\" returns successfully" Jan 14 13:27:28.910971 containerd[1649]: time="2026-01-14T13:27:28.909003434Z" level=info msg="StartContainer for \"f14aa9dabf10d24529119379975934fa66e34636878b1eda0a952645c87c773b\" returns successfully" Jan 14 13:27:28.937967 containerd[1649]: time="2026-01-14T13:27:28.937934990Z" level=info msg="StartContainer for \"0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59\" returns successfully" Jan 14 13:27:28.956568 kubelet[2507]: E0114 13:27:28.956536 2507 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 13:27:29.368917 kubelet[2507]: I0114 13:27:29.368791 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:27:29.856429 kubelet[2507]: E0114 13:27:29.856050 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:29.858750 kubelet[2507]: E0114 13:27:29.857679 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:29.871306 kubelet[2507]: E0114 13:27:29.871282 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:29.872828 kubelet[2507]: E0114 13:27:29.872810 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:29.879580 kubelet[2507]: E0114 13:27:29.879555 2507 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 13:27:29.881203 kubelet[2507]: E0114 13:27:29.880505 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:30.380862 kubelet[2507]: E0114 13:27:30.380757 2507 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 13:27:30.470408 kubelet[2507]: I0114 13:27:30.470293 2507 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 13:27:30.470408 kubelet[2507]: E0114 13:27:30.470326 2507 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 14 13:27:30.537291 kubelet[2507]: I0114 13:27:30.537019 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:30.654493 kubelet[2507]: E0114 13:27:30.654391 2507 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:30.654493 kubelet[2507]: I0114 13:27:30.654428 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:30.663016 kubelet[2507]: E0114 13:27:30.662883 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:30.663016 kubelet[2507]: I0114 13:27:30.662983 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:30.673330 kubelet[2507]: E0114 13:27:30.672834 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:30.707795 kubelet[2507]: I0114 13:27:30.707458 2507 apiserver.go:52] "Watching apiserver" Jan 14 13:27:30.741669 kubelet[2507]: I0114 13:27:30.741628 2507 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:27:30.884324 kubelet[2507]: I0114 13:27:30.884014 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:30.886483 kubelet[2507]: I0114 13:27:30.886461 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:30.886559 kubelet[2507]: I0114 13:27:30.886518 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:30.888862 kubelet[2507]: E0114 13:27:30.888659 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:30.888862 kubelet[2507]: E0114 13:27:30.888794 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:30.889570 kubelet[2507]: E0114 13:27:30.889455 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:30.889570 kubelet[2507]: E0114 13:27:30.889553 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:30.890301 kubelet[2507]: E0114 13:27:30.889897 2507 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:30.890301 kubelet[2507]: E0114 13:27:30.890252 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:31.887295 kubelet[2507]: I0114 13:27:31.886900 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:31.887928 kubelet[2507]: I0114 13:27:31.887491 2507 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:31.896176 kubelet[2507]: E0114 13:27:31.895860 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:31.897945 kubelet[2507]: E0114 
13:27:31.897747 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:32.722361 systemd[1]: Reload requested from client PID 2795 ('systemctl') (unit session-10.scope)... Jan 14 13:27:32.722436 systemd[1]: Reloading... Jan 14 13:27:32.841319 zram_generator::config[2844]: No configuration found. Jan 14 13:27:32.891306 kubelet[2507]: E0114 13:27:32.890954 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:32.891306 kubelet[2507]: E0114 13:27:32.890961 2507 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:33.118449 systemd[1]: Reloading finished in 395 ms. Jan 14 13:27:33.174619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:33.191979 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:27:33.192664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:33.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:33.197375 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 14 13:27:33.197432 kernel: audit: type=1131 audit(1768397253.191:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:33.197497 systemd[1]: kubelet.service: Consumed 1.947s CPU time, 129.6M memory peak. 
Jan 14 13:27:33.200440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:27:33.200000 audit: BPF prog-id=116 op=LOAD Jan 14 13:27:33.200000 audit: BPF prog-id=85 op=UNLOAD Jan 14 13:27:33.202000 audit: BPF prog-id=117 op=LOAD Jan 14 13:27:33.202000 audit: BPF prog-id=68 op=UNLOAD Jan 14 13:27:33.203000 audit: BPF prog-id=118 op=LOAD Jan 14 13:27:33.203000 audit: BPF prog-id=119 op=LOAD Jan 14 13:27:33.203000 audit: BPF prog-id=69 op=UNLOAD Jan 14 13:27:33.203000 audit: BPF prog-id=70 op=UNLOAD Jan 14 13:27:33.204000 audit: BPF prog-id=120 op=LOAD Jan 14 13:27:33.220578 kernel: audit: type=1334 audit(1768397253.200:393): prog-id=116 op=LOAD Jan 14 13:27:33.220627 kernel: audit: type=1334 audit(1768397253.200:394): prog-id=85 op=UNLOAD Jan 14 13:27:33.220652 kernel: audit: type=1334 audit(1768397253.202:395): prog-id=117 op=LOAD Jan 14 13:27:33.220670 kernel: audit: type=1334 audit(1768397253.202:396): prog-id=68 op=UNLOAD Jan 14 13:27:33.220692 kernel: audit: type=1334 audit(1768397253.203:397): prog-id=118 op=LOAD Jan 14 13:27:33.220710 kernel: audit: type=1334 audit(1768397253.203:398): prog-id=119 op=LOAD Jan 14 13:27:33.220732 kernel: audit: type=1334 audit(1768397253.203:399): prog-id=69 op=UNLOAD Jan 14 13:27:33.220751 kernel: audit: type=1334 audit(1768397253.203:400): prog-id=70 op=UNLOAD Jan 14 13:27:33.220771 kernel: audit: type=1334 audit(1768397253.204:401): prog-id=120 op=LOAD Jan 14 13:27:33.204000 audit: BPF prog-id=80 op=UNLOAD Jan 14 13:27:33.204000 audit: BPF prog-id=121 op=LOAD Jan 14 13:27:33.204000 audit: BPF prog-id=122 op=LOAD Jan 14 13:27:33.204000 audit: BPF prog-id=81 op=UNLOAD Jan 14 13:27:33.204000 audit: BPF prog-id=82 op=UNLOAD Jan 14 13:27:33.206000 audit: BPF prog-id=123 op=LOAD Jan 14 13:27:33.206000 audit: BPF prog-id=74 op=UNLOAD Jan 14 13:27:33.206000 audit: BPF prog-id=124 op=LOAD Jan 14 13:27:33.206000 audit: BPF prog-id=125 op=LOAD Jan 14 13:27:33.206000 audit: BPF prog-id=75 op=UNLOAD Jan 
14 13:27:33.206000 audit: BPF prog-id=76 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=126 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=127 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=83 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=84 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=128 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=77 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=129 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=130 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=78 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=79 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=131 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=71 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=132 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=133 op=LOAD Jan 14 13:27:33.207000 audit: BPF prog-id=72 op=UNLOAD Jan 14 13:27:33.207000 audit: BPF prog-id=73 op=UNLOAD Jan 14 13:27:33.212000 audit: BPF prog-id=134 op=LOAD Jan 14 13:27:33.212000 audit: BPF prog-id=66 op=UNLOAD Jan 14 13:27:33.214000 audit: BPF prog-id=135 op=LOAD Jan 14 13:27:33.214000 audit: BPF prog-id=67 op=UNLOAD Jan 14 13:27:33.523783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:27:33.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:33.537838 (kubelet)[2886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:27:33.693503 kubelet[2886]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 13:27:33.693503 kubelet[2886]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 13:27:33.693503 kubelet[2886]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:27:33.693503 kubelet[2886]: I0114 13:27:33.693465 2886 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:27:33.710028 kubelet[2886]: I0114 13:27:33.709685 2886 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 13:27:33.710028 kubelet[2886]: I0114 13:27:33.709704 2886 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:27:33.710028 kubelet[2886]: I0114 13:27:33.709848 2886 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 13:27:33.712361 kubelet[2886]: I0114 13:27:33.711362 2886 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 13:27:33.715723 kubelet[2886]: I0114 13:27:33.715603 2886 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:27:33.724834 kubelet[2886]: I0114 13:27:33.724766 2886 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:27:33.737839 kubelet[2886]: I0114 13:27:33.736882 2886 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:27:33.737839 kubelet[2886]: I0114 13:27:33.737325 2886 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:27:33.737839 kubelet[2886]: I0114 13:27:33.737341 2886 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:27:33.737839 kubelet[2886]: I0114 13:27:33.737454 2886 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:27:33.738284 
kubelet[2886]: I0114 13:27:33.737461 2886 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 13:27:33.738682 kubelet[2886]: I0114 13:27:33.738603 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:27:33.739279 kubelet[2886]: I0114 13:27:33.739066 2886 kubelet.go:480] "Attempting to sync node with API server" Jan 14 13:27:33.740250 kubelet[2886]: I0114 13:27:33.739778 2886 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:27:33.740250 kubelet[2886]: I0114 13:27:33.739806 2886 kubelet.go:386] "Adding apiserver pod source" Jan 14 13:27:33.740250 kubelet[2886]: I0114 13:27:33.739820 2886 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:27:33.742550 kubelet[2886]: I0114 13:27:33.742514 2886 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:27:33.744060 kubelet[2886]: I0114 13:27:33.743429 2886 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 13:27:33.753892 kubelet[2886]: I0114 13:27:33.753777 2886 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:27:33.753892 kubelet[2886]: I0114 13:27:33.753816 2886 server.go:1289] "Started kubelet" Jan 14 13:27:33.755757 kubelet[2886]: I0114 13:27:33.755683 2886 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:27:33.756436 kubelet[2886]: I0114 13:27:33.756047 2886 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:27:33.756685 kubelet[2886]: I0114 13:27:33.756570 2886 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:27:33.767802 kubelet[2886]: I0114 13:27:33.767785 2886 server.go:317] "Adding debug handlers to kubelet server" Jan 14 13:27:33.772328 
kubelet[2886]: I0114 13:27:33.772308 2886 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:27:33.774371 kubelet[2886]: I0114 13:27:33.773941 2886 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:27:33.774371 kubelet[2886]: I0114 13:27:33.773961 2886 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:27:33.775321 kubelet[2886]: I0114 13:27:33.773967 2886 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:27:33.775798 kubelet[2886]: I0114 13:27:33.775642 2886 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:27:33.782775 kubelet[2886]: E0114 13:27:33.782425 2886 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:27:33.789584 kubelet[2886]: I0114 13:27:33.789335 2886 factory.go:223] Registration of the systemd container factory successfully Jan 14 13:27:33.792583 kubelet[2886]: I0114 13:27:33.792547 2886 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:27:33.796475 kubelet[2886]: I0114 13:27:33.796330 2886 factory.go:223] Registration of the containerd container factory successfully Jan 14 13:27:33.814843 kubelet[2886]: I0114 13:27:33.814670 2886 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 13:27:33.870767 kubelet[2886]: I0114 13:27:33.870588 2886 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 14 13:27:33.870767 kubelet[2886]: I0114 13:27:33.870612 2886 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 13:27:33.870767 kubelet[2886]: I0114 13:27:33.870629 2886 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 13:27:33.870767 kubelet[2886]: I0114 13:27:33.870635 2886 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 13:27:33.870767 kubelet[2886]: E0114 13:27:33.870679 2886 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:27:33.917302 kubelet[2886]: I0114 13:27:33.916990 2886 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:27:33.917302 kubelet[2886]: I0114 13:27:33.917009 2886 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:27:33.917302 kubelet[2886]: I0114 13:27:33.917029 2886 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918394 2886 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918409 2886 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918428 2886 policy_none.go:49] "None policy: Start" Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918441 2886 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918455 2886 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:27:33.918680 kubelet[2886]: I0114 13:27:33.918574 2886 state_mem.go:75] "Updated machine memory state" Jan 14 13:27:33.930402 kubelet[2886]: E0114 13:27:33.929911 2886 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 13:27:33.930402 kubelet[2886]: I0114 
13:27:33.930315 2886 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:27:33.930402 kubelet[2886]: I0114 13:27:33.930329 2886 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:27:33.934016 kubelet[2886]: I0114 13:27:33.933646 2886 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:27:33.938674 kubelet[2886]: E0114 13:27:33.938594 2886 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 13:27:33.974930 kubelet[2886]: I0114 13:27:33.974061 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:33.974930 kubelet[2886]: I0114 13:27:33.974426 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.979055 kubelet[2886]: I0114 13:27:33.979039 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:33.980851 kubelet[2886]: I0114 13:27:33.980710 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:33.983009 kubelet[2886]: I0114 13:27:33.982675 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:33.983009 kubelet[2886]: I0114 13:27:33.982702 2886 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:33.983009 kubelet[2886]: I0114 13:27:33.982721 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9543835c0adc08e6f94864c11eaea9c2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9543835c0adc08e6f94864c11eaea9c2\") " pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:33.983009 kubelet[2886]: I0114 13:27:33.982737 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.983009 kubelet[2886]: I0114 13:27:33.982749 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.983296 kubelet[2886]: I0114 13:27:33.982762 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.983296 kubelet[2886]: I0114 13:27:33.982773 
2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.983296 kubelet[2886]: I0114 13:27:33.982786 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 13:27:33.991010 kubelet[2886]: E0114 13:27:33.990924 2886 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:33.991455 kubelet[2886]: E0114 13:27:33.991365 2886 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 13:27:34.044837 kubelet[2886]: I0114 13:27:34.044547 2886 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 13:27:34.061824 kubelet[2886]: I0114 13:27:34.061713 2886 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 13:27:34.061824 kubelet[2886]: I0114 13:27:34.061764 2886 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 13:27:34.292440 kubelet[2886]: E0114 13:27:34.292409 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.293810 kubelet[2886]: E0114 13:27:34.292784 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.294659 kubelet[2886]: E0114 13:27:34.293395 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.741059 kubelet[2886]: I0114 13:27:34.740674 2886 apiserver.go:52] "Watching apiserver" Jan 14 13:27:34.777071 kubelet[2886]: I0114 13:27:34.776474 2886 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:27:34.916690 kubelet[2886]: E0114 13:27:34.916469 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.916893 kubelet[2886]: E0114 13:27:34.916819 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.918386 kubelet[2886]: I0114 13:27:34.917013 2886 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:34.928692 kubelet[2886]: E0114 13:27:34.928443 2886 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 14 13:27:34.928692 kubelet[2886]: E0114 13:27:34.928534 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:34.956050 kubelet[2886]: I0114 13:27:34.955897 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.955884977 podStartE2EDuration="3.955884977s" podCreationTimestamp="2026-01-14 13:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:27:34.955667421 +0000 UTC m=+1.393288828" watchObservedRunningTime="2026-01-14 13:27:34.955884977 +0000 UTC m=+1.393506384" Jan 14 13:27:34.983893 kubelet[2886]: I0114 13:27:34.983719 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.983709603 podStartE2EDuration="3.983709603s" podCreationTimestamp="2026-01-14 13:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:27:34.967924749 +0000 UTC m=+1.405546157" watchObservedRunningTime="2026-01-14 13:27:34.983709603 +0000 UTC m=+1.421331010" Jan 14 13:27:34.983893 kubelet[2886]: I0114 13:27:34.983852 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.983845503 podStartE2EDuration="1.983845503s" podCreationTimestamp="2026-01-14 13:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:27:34.982620198 +0000 UTC m=+1.420241604" watchObservedRunningTime="2026-01-14 13:27:34.983845503 +0000 UTC m=+1.421466910" Jan 14 13:27:35.920412 kubelet[2886]: E0114 13:27:35.919773 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:35.920412 kubelet[2886]: E0114 13:27:35.920538 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:36.921883 kubelet[2886]: E0114 13:27:36.921624 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:38.556751 kubelet[2886]: E0114 13:27:38.556643 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:38.669060 kubelet[2886]: I0114 13:27:38.668919 2886 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:27:38.669937 containerd[1649]: time="2026-01-14T13:27:38.669801534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 13:27:38.670663 kubelet[2886]: I0114 13:27:38.670450 2886 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:27:38.927238 kubelet[2886]: E0114 13:27:38.926629 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:39.143564 systemd[1]: Created slice kubepods-besteffort-poda395a0d0_6c42_4add_8276_a783f5b97f4f.slice - libcontainer container kubepods-besteffort-poda395a0d0_6c42_4add_8276_a783f5b97f4f.slice. 
Jan 14 13:27:39.147386 kubelet[2886]: W0114 13:27:39.147210 2886 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda395a0d0_6c42_4add_8276_a783f5b97f4f.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda395a0d0_6c42_4add_8276_a783f5b97f4f.slice/cpuset.cpus.effective: no such device Jan 14 13:27:39.234752 kubelet[2886]: I0114 13:27:39.234570 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a395a0d0-6c42-4add-8276-a783f5b97f4f-lib-modules\") pod \"kube-proxy-cfmtr\" (UID: \"a395a0d0-6c42-4add-8276-a783f5b97f4f\") " pod="kube-system/kube-proxy-cfmtr" Jan 14 13:27:39.234752 kubelet[2886]: I0114 13:27:39.234701 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a395a0d0-6c42-4add-8276-a783f5b97f4f-kube-proxy\") pod \"kube-proxy-cfmtr\" (UID: \"a395a0d0-6c42-4add-8276-a783f5b97f4f\") " pod="kube-system/kube-proxy-cfmtr" Jan 14 13:27:39.234752 kubelet[2886]: I0114 13:27:39.234726 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a395a0d0-6c42-4add-8276-a783f5b97f4f-xtables-lock\") pod \"kube-proxy-cfmtr\" (UID: \"a395a0d0-6c42-4add-8276-a783f5b97f4f\") " pod="kube-system/kube-proxy-cfmtr" Jan 14 13:27:39.234752 kubelet[2886]: I0114 13:27:39.234748 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4cq\" (UniqueName: \"kubernetes.io/projected/a395a0d0-6c42-4add-8276-a783f5b97f4f-kube-api-access-mz4cq\") pod \"kube-proxy-cfmtr\" (UID: \"a395a0d0-6c42-4add-8276-a783f5b97f4f\") " pod="kube-system/kube-proxy-cfmtr" Jan 14 13:27:39.353804 kubelet[2886]: E0114 
13:27:39.353597 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:39.454211 kubelet[2886]: E0114 13:27:39.453798 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:39.455028 containerd[1649]: time="2026-01-14T13:27:39.454909971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfmtr,Uid:a395a0d0-6c42-4add-8276-a783f5b97f4f,Namespace:kube-system,Attempt:0,}" Jan 14 13:27:39.561514 containerd[1649]: time="2026-01-14T13:27:39.560887368Z" level=info msg="connecting to shim 10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249" address="unix:///run/containerd/s/4bdc8b1620274258a014deb0b9e2aea8459459022ecece8a1c4e853190a2a5bb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:39.706553 systemd[1]: Started cri-containerd-10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249.scope - libcontainer container 10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249. 
Jan 14 13:27:39.739266 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 13:27:39.739427 kernel: audit: type=1334 audit(1768397259.730:434): prog-id=136 op=LOAD Jan 14 13:27:39.730000 audit: BPF prog-id=136 op=LOAD Jan 14 13:27:39.732000 audit: BPF prog-id=137 op=LOAD Jan 14 13:27:39.750499 kernel: audit: type=1334 audit(1768397259.732:435): prog-id=137 op=LOAD Jan 14 13:27:39.732000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.798488 kernel: audit: type=1300 audit(1768397259.732:435): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.798538 kernel: audit: type=1327 audit(1768397259.732:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.805797 kernel: audit: type=1334 audit(1768397259.732:436): prog-id=137 op=UNLOAD Jan 14 13:27:39.732000 audit: BPF prog-id=137 op=UNLOAD Jan 14 13:27:39.732000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.836506 kernel: audit: type=1300 audit(1768397259.732:436): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.866199 containerd[1649]: time="2026-01-14T13:27:39.865650038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cfmtr,Uid:a395a0d0-6c42-4add-8276-a783f5b97f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249\"" Jan 14 13:27:39.867505 kubelet[2886]: E0114 13:27:39.867335 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:39.876865 kernel: audit: type=1327 audit(1768397259.732:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.876926 kernel: audit: type=1334 audit(1768397259.733:437): prog-id=138 op=LOAD Jan 14 13:27:39.733000 audit: BPF prog-id=138 op=LOAD Jan 14 13:27:39.733000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 
ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.884729 containerd[1649]: time="2026-01-14T13:27:39.884557657Z" level=info msg="CreateContainer within sandbox \"10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:27:39.904986 kernel: audit: type=1300 audit(1768397259.733:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.930435 kernel: audit: type=1327 audit(1768397259.733:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.733000 audit: BPF prog-id=139 op=LOAD Jan 14 13:27:39.733000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.733000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.733000 audit: BPF prog-id=139 op=UNLOAD Jan 14 13:27:39.733000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.733000 audit: BPF prog-id=138 op=UNLOAD Jan 14 13:27:39.733000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:39.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.733000 audit: BPF prog-id=140 op=LOAD Jan 14 13:27:39.733000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2953 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:39.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130666165363031336338383631383364636530353734366465343439 Jan 14 13:27:39.936257 containerd[1649]: time="2026-01-14T13:27:39.936221870Z" level=info msg="Container bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:27:39.942027 kubelet[2886]: E0114 13:27:39.941983 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:39.960815 containerd[1649]: time="2026-01-14T13:27:39.960643490Z" level=info msg="CreateContainer within sandbox \"10fae6013c886183dce05746de449ca21cc5b826f20d0822ae2fdbe9ee26e249\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308\"" Jan 14 13:27:39.961278 systemd[1]: Created slice kubepods-besteffort-pod1aba6152_e602_4d72_bf2a_fc2ab296ef2c.slice - libcontainer container kubepods-besteffort-pod1aba6152_e602_4d72_bf2a_fc2ab296ef2c.slice. 
Jan 14 13:27:39.963654 containerd[1649]: time="2026-01-14T13:27:39.963449640Z" level=info msg="StartContainer for \"bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308\"" Jan 14 13:27:39.970973 containerd[1649]: time="2026-01-14T13:27:39.970918498Z" level=info msg="connecting to shim bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308" address="unix:///run/containerd/s/4bdc8b1620274258a014deb0b9e2aea8459459022ecece8a1c4e853190a2a5bb" protocol=ttrpc version=3 Jan 14 13:27:40.021643 systemd[1]: Started cri-containerd-bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308.scope - libcontainer container bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308. Jan 14 13:27:40.044252 kubelet[2886]: I0114 13:27:40.044204 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchm8\" (UniqueName: \"kubernetes.io/projected/1aba6152-e602-4d72-bf2a-fc2ab296ef2c-kube-api-access-qchm8\") pod \"tigera-operator-7dcd859c48-58z25\" (UID: \"1aba6152-e602-4d72-bf2a-fc2ab296ef2c\") " pod="tigera-operator/tigera-operator-7dcd859c48-58z25" Jan 14 13:27:40.044252 kubelet[2886]: I0114 13:27:40.044263 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1aba6152-e602-4d72-bf2a-fc2ab296ef2c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-58z25\" (UID: \"1aba6152-e602-4d72-bf2a-fc2ab296ef2c\") " pod="tigera-operator/tigera-operator-7dcd859c48-58z25" Jan 14 13:27:40.110000 audit: BPF prog-id=141 op=LOAD Jan 14 13:27:40.110000 audit[2990]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2953 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.110000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313163306236353534346562653465643135346263313936636138 Jan 14 13:27:40.110000 audit: BPF prog-id=142 op=LOAD Jan 14 13:27:40.110000 audit[2990]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2953 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313163306236353534346562653465643135346263313936636138 Jan 14 13:27:40.110000 audit: BPF prog-id=142 op=UNLOAD Jan 14 13:27:40.110000 audit[2990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313163306236353534346562653465643135346263313936636138 Jan 14 13:27:40.110000 audit: BPF prog-id=141 op=UNLOAD Jan 14 13:27:40.110000 audit[2990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:40.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313163306236353534346562653465643135346263313936636138 Jan 14 13:27:40.110000 audit: BPF prog-id=143 op=LOAD Jan 14 13:27:40.110000 audit[2990]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2953 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264313163306236353534346562653465643135346263313936636138 Jan 14 13:27:40.164026 containerd[1649]: time="2026-01-14T13:27:40.163993548Z" level=info msg="StartContainer for \"bd11c0b65544ebe4ed154bc196ca87ffade45c891f533a36859a5eae325d3308\" returns successfully" Jan 14 13:27:40.271881 containerd[1649]: time="2026-01-14T13:27:40.271848117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-58z25,Uid:1aba6152-e602-4d72-bf2a-fc2ab296ef2c,Namespace:tigera-operator,Attempt:0,}" Jan 14 13:27:40.321190 containerd[1649]: time="2026-01-14T13:27:40.320956175Z" level=info msg="connecting to shim b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd" address="unix:///run/containerd/s/f09edfdb7b909135d842624bb9204e9f056e36de92a11856c862ec5cd7bec266" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:40.409437 systemd[1]: Started cri-containerd-b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd.scope - libcontainer container b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd. 
Jan 14 13:27:40.435000 audit: BPF prog-id=144 op=LOAD Jan 14 13:27:40.437000 audit: BPF prog-id=145 op=LOAD Jan 14 13:27:40.437000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.438000 audit: BPF prog-id=145 op=UNLOAD Jan 14 13:27:40.438000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.439000 audit: BPF prog-id=146 op=LOAD Jan 14 13:27:40.439000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.439000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.440000 audit: BPF prog-id=147 op=LOAD Jan 14 13:27:40.440000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.440000 audit: BPF prog-id=147 op=UNLOAD Jan 14 13:27:40.440000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.440000 audit: BPF prog-id=146 op=UNLOAD Jan 14 13:27:40.440000 audit[3055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:40.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.440000 audit: BPF prog-id=148 op=LOAD Jan 14 13:27:40.440000 audit[3055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3038 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230316266363030383331386661316433353538356163333836383666 Jan 14 13:27:40.473000 audit[3095]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.473000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8b396580 a2=0 a3=7ffd8b39656c items=0 ppid=3003 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.473000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:27:40.485000 audit[3098]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.485000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1e25aeb0 a2=0 a3=7ffe1e25ae9c items=0 ppid=3003 pid=3098 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:27:40.489000 audit[3097]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.489000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0eda7950 a2=0 a3=7ffc0eda793c items=0 ppid=3003 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:27:40.502000 audit[3102]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.502000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5df79930 a2=0 a3=7ffd5df7991c items=0 ppid=3003 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:27:40.505000 audit[3103]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.505000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd107471e0 a2=0 
a3=7ffd107471cc items=0 ppid=3003 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:27:40.509000 audit[3106]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.509000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf941f400 a2=0 a3=7ffdf941f3ec items=0 ppid=3003 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:27:40.533958 containerd[1649]: time="2026-01-14T13:27:40.533617079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-58z25,Uid:1aba6152-e602-4d72-bf2a-fc2ab296ef2c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd\"" Jan 14 13:27:40.542941 containerd[1649]: time="2026-01-14T13:27:40.542661303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 13:27:40.593000 audit[3112]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.593000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffd2580ae0 a2=0 a3=7fffd2580acc items=0 ppid=3003 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:27:40.602000 audit[3114]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.602000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe18256ec0 a2=0 a3=7ffe18256eac items=0 ppid=3003 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 13:27:40.615000 audit[3117]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.615000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd5307e070 a2=0 a3=7ffd5307e05c items=0 ppid=3003 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 13:27:40.620000 audit[3118]: NETFILTER_CFG table=filter:63 family=2 
entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.620000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb3ca0f90 a2=0 a3=7ffeb3ca0f7c items=0 ppid=3003 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:27:40.629000 audit[3120]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.629000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc47ad59a0 a2=0 a3=7ffc47ad598c items=0 ppid=3003 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.629000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:27:40.633000 audit[3121]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.633000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef1b32920 a2=0 a3=7ffef1b3290c items=0 ppid=3003 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.633000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:27:40.643000 audit[3123]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.643000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe2bd35e60 a2=0 a3=7ffe2bd35e4c items=0 ppid=3003 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:27:40.656000 audit[3126]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.656000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdae073ff0 a2=0 a3=7ffdae073fdc items=0 ppid=3003 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 13:27:40.661000 audit[3127]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.661000 audit[3127]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffc51aee880 a2=0 a3=7ffc51aee86c items=0 ppid=3003 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:27:40.671000 audit[3129]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.671000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff4f093f10 a2=0 a3=7fff4f093efc items=0 ppid=3003 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:27:40.675000 audit[3130]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.675000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9e364ce0 a2=0 a3=7ffc9e364ccc items=0 ppid=3003 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 13:27:40.684000 audit[3132]: NETFILTER_CFG table=filter:71 family=2 entries=1 
op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.684000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff487d9dd0 a2=0 a3=7fff487d9dbc items=0 ppid=3003 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:27:40.699000 audit[3135]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.699000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc35e9db60 a2=0 a3=7ffc35e9db4c items=0 ppid=3003 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:27:40.713000 audit[3138]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.713000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff26435af0 a2=0 a3=7fff26435adc items=0 ppid=3003 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.713000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:27:40.717000 audit[3139]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.717000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe55a8abf0 a2=0 a3=7ffe55a8abdc items=0 ppid=3003 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:27:40.727000 audit[3141]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.727000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe419b0260 a2=0 a3=7ffe419b024c items=0 ppid=3003 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:27:40.741000 audit[3144]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 13:27:40.741000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcdb669f40 a2=0 a3=7ffcdb669f2c items=0 ppid=3003 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:27:40.745000 audit[3145]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.745000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5b8f9d40 a2=0 a3=7ffd5b8f9d2c items=0 ppid=3003 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:27:40.755000 audit[3147]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:27:40.755000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc23b95250 a2=0 a3=7ffc23b9523c items=0 ppid=3003 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.755000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:27:40.807000 audit[3153]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:40.807000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc861ea940 a2=0 a3=7ffc861ea92c items=0 ppid=3003 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:40.824000 audit[3153]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:40.824000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc861ea940 a2=0 a3=7ffc861ea92c items=0 ppid=3003 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:40.828000 audit[3158]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.828000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc7bf0f70 a2=0 a3=7fffc7bf0f5c items=0 ppid=3003 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:27:40.838000 audit[3160]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.838000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffca625e490 a2=0 a3=7ffca625e47c items=0 ppid=3003 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 13:27:40.851000 audit[3163]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.851000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff61ce4b90 a2=0 a3=7fff61ce4b7c items=0 ppid=3003 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 13:27:40.856000 
audit[3164]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.856000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff05f8cc0 a2=0 a3=7ffff05f8cac items=0 ppid=3003 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:27:40.865000 audit[3166]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.865000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb3708530 a2=0 a3=7fffb370851c items=0 ppid=3003 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:27:40.870000 audit[3167]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.870000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffffc923b0 a2=0 a3=7fffffc9239c items=0 ppid=3003 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:40.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:27:40.879000 audit[3169]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.879000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffddd353ce0 a2=0 a3=7ffddd353ccc items=0 ppid=3003 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 13:27:40.893000 audit[3172]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.893000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffefa3cfcc0 a2=0 a3=7ffefa3cfcac items=0 ppid=3003 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:27:40.897000 audit[3173]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.897000 
audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef1a1c260 a2=0 a3=7ffef1a1c24c items=0 ppid=3003 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:27:40.907000 audit[3175]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.907000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc96344a80 a2=0 a3=7ffc96344a6c items=0 ppid=3003 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:27:40.911000 audit[3176]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.911000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4c7eff90 a2=0 a3=7fff4c7eff7c items=0 ppid=3003 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 13:27:40.922000 audit[3178]: 
NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.922000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff37954820 a2=0 a3=7fff3795480c items=0 ppid=3003 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:27:40.938000 audit[3181]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.938000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3d77a810 a2=0 a3=7ffc3d77a7fc items=0 ppid=3003 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:27:40.949632 kubelet[2886]: E0114 13:27:40.949555 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:40.953000 audit[3184]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3184 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.953000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdac1e73a0 a2=0 a3=7ffdac1e738c items=0 ppid=3003 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 13:27:40.958000 audit[3185]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.958000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdd57eea00 a2=0 a3=7ffdd57ee9ec items=0 ppid=3003 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:27:40.972000 audit[3187]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.972000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe723e53b0 a2=0 a3=7ffe723e539c items=0 ppid=3003 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.972000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:27:40.985000 audit[3190]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.985000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff11d5d5a0 a2=0 a3=7fff11d5d58c items=0 ppid=3003 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:27:40.990000 audit[3191]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:40.990000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc430aef0 a2=0 a3=7ffcc430aedc items=0 ppid=3003 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:40.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:27:41.003000 audit[3193]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:41.003000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe9d7cb2b0 a2=0 a3=7ffe9d7cb29c items=0 
ppid=3003 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:27:41.007000 audit[3194]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:41.007000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef049b7f0 a2=0 a3=7ffef049b7dc items=0 ppid=3003 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:27:41.015000 audit[3196]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:41.015000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd40cae510 a2=0 a3=7ffd40cae4fc items=0 ppid=3003 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:27:41.028000 audit[3199]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3199 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:27:41.028000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe110988c0 a2=0 a3=7ffe110988ac items=0 ppid=3003 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:27:41.038000 audit[3201]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:27:41.038000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff588757a0 a2=0 a3=7fff5887578c items=0 ppid=3003 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.038000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:41.038000 audit[3201]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:27:41.038000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff588757a0 a2=0 a3=7fff5887578c items=0 ppid=3003 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:41.038000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:41.757609 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount95654757.mount: Deactivated successfully. Jan 14 13:27:41.955679 kubelet[2886]: E0114 13:27:41.955317 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:43.225876 containerd[1649]: time="2026-01-14T13:27:43.225549359Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:43.227825 containerd[1649]: time="2026-01-14T13:27:43.227731735Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 14 13:27:43.229595 containerd[1649]: time="2026-01-14T13:27:43.229432830Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:43.233465 containerd[1649]: time="2026-01-14T13:27:43.233322735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:27:43.234288 containerd[1649]: time="2026-01-14T13:27:43.233935347Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.691180909s" Jan 14 13:27:43.234288 containerd[1649]: time="2026-01-14T13:27:43.234037621Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 13:27:43.241340 containerd[1649]: 
time="2026-01-14T13:27:43.241267393Z" level=info msg="CreateContainer within sandbox \"b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 13:27:43.256046 containerd[1649]: time="2026-01-14T13:27:43.255884612Z" level=info msg="Container 5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:27:43.259810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531830895.mount: Deactivated successfully. Jan 14 13:27:43.265571 containerd[1649]: time="2026-01-14T13:27:43.265361568Z" level=info msg="CreateContainer within sandbox \"b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64\"" Jan 14 13:27:43.266729 containerd[1649]: time="2026-01-14T13:27:43.266648000Z" level=info msg="StartContainer for \"5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64\"" Jan 14 13:27:43.268031 containerd[1649]: time="2026-01-14T13:27:43.267891200Z" level=info msg="connecting to shim 5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64" address="unix:///run/containerd/s/f09edfdb7b909135d842624bb9204e9f056e36de92a11856c862ec5cd7bec266" protocol=ttrpc version=3 Jan 14 13:27:43.311694 systemd[1]: Started cri-containerd-5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64.scope - libcontainer container 5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64. 
Jan 14 13:27:43.345000 audit: BPF prog-id=149 op=LOAD Jan 14 13:27:43.346000 audit: BPF prog-id=150 op=LOAD Jan 14 13:27:43.346000 audit[3210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015c238 a2=98 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.346000 audit: BPF prog-id=150 op=UNLOAD Jan 14 13:27:43.346000 audit[3210]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.347000 audit: BPF prog-id=151 op=LOAD Jan 14 13:27:43.347000 audit[3210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015c488 a2=98 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.347000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.348000 audit: BPF prog-id=152 op=LOAD Jan 14 13:27:43.348000 audit[3210]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00015c218 a2=98 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.348000 audit: BPF prog-id=152 op=UNLOAD Jan 14 13:27:43.348000 audit[3210]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.348000 audit: BPF prog-id=151 op=UNLOAD Jan 14 13:27:43.348000 audit[3210]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:43.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.349000 audit: BPF prog-id=153 op=LOAD Jan 14 13:27:43.349000 audit[3210]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00015c6e8 a2=98 a3=0 items=0 ppid=3038 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:43.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564363536376464623430343038323532306239616230306538313636 Jan 14 13:27:43.405213 containerd[1649]: time="2026-01-14T13:27:43.405060616Z" level=info msg="StartContainer for \"5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64\" returns successfully" Jan 14 13:27:43.980324 kubelet[2886]: I0114 13:27:43.979601 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cfmtr" podStartSLOduration=4.979582286 podStartE2EDuration="4.979582286s" podCreationTimestamp="2026-01-14 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:27:40.965826188 +0000 UTC m=+7.403447595" watchObservedRunningTime="2026-01-14 13:27:43.979582286 +0000 UTC m=+10.417203693" Jan 14 13:27:43.981574 kubelet[2886]: I0114 13:27:43.980367 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-58z25" podStartSLOduration=2.285841858 podStartE2EDuration="4.980359165s" 
podCreationTimestamp="2026-01-14 13:27:39 +0000 UTC" firstStartedPulling="2026-01-14 13:27:40.541255954 +0000 UTC m=+6.978877371" lastFinishedPulling="2026-01-14 13:27:43.235773272 +0000 UTC m=+9.673394678" observedRunningTime="2026-01-14 13:27:43.979990373 +0000 UTC m=+10.417611800" watchObservedRunningTime="2026-01-14 13:27:43.980359165 +0000 UTC m=+10.417980572" Jan 14 13:27:44.924244 kubelet[2886]: E0114 13:27:44.923747 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:44.974447 kubelet[2886]: E0114 13:27:44.974016 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:50.340000 audit[1904]: USER_END pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:50.341530 sudo[1904]: pam_unix(sudo:session): session closed for user root Jan 14 13:27:50.346993 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 13:27:50.347067 kernel: audit: type=1106 audit(1768397270.340:514): pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:50.367000 audit[1904]: CRED_DISP pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 13:27:50.391197 kernel: audit: type=1104 audit(1768397270.367:515): pid=1904 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:27:50.396571 sshd[1903]: Connection closed by 10.0.0.1 port 41598 Jan 14 13:27:50.398315 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:50.401000 audit[1898]: USER_END pid=1898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:50.406806 systemd-logind[1630]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:27:50.411841 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:41598.service: Deactivated successfully. Jan 14 13:27:50.421408 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:27:50.422009 systemd[1]: session-10.scope: Consumed 7.686s CPU time, 215M memory peak. Jan 14 13:27:50.435289 kernel: audit: type=1106 audit(1768397270.401:516): pid=1898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:50.401000 audit[1898]: CRED_DISP pid=1898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:50.436487 systemd-logind[1630]: Removed session 10. 
Jan 14 13:27:50.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:50.481850 kernel: audit: type=1104 audit(1768397270.401:517): pid=1898 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:27:50.482465 kernel: audit: type=1131 audit(1768397270.413:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:41598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:27:50.980000 audit[3300]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:51.001366 kernel: audit: type=1325 audit(1768397270.980:519): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:50.980000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff069b0000 a2=0 a3=7fff069affec items=0 ppid=3003 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.042580 kernel: audit: type=1300 audit(1768397270.980:519): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff069b0000 a2=0 a3=7fff069affec items=0 ppid=3003 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.042747 kernel: 
audit: type=1327 audit(1768397270.980:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:50.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:51.003000 audit[3300]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:51.056185 kernel: audit: type=1325 audit(1768397271.003:520): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:51.003000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff069b0000 a2=0 a3=0 items=0 ppid=3003 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.084890 kernel: audit: type=1300 audit(1768397271.003:520): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff069b0000 a2=0 a3=0 items=0 ppid=3003 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:51.110000 audit[3302]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:51.110000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcd3cb03e0 a2=0 a3=7ffcd3cb03cc items=0 ppid=3003 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:51.122000 audit[3302]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:51.122000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd3cb03e0 a2=0 a3=0 items=0 ppid=3003 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:51.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:54.247000 audit[3307]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:54.247000 audit[3307]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc23cdcaa0 a2=0 a3=7ffc23cdca8c items=0 ppid=3003 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:54.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:54.259000 audit[3307]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:54.259000 audit[3307]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc23cdcaa0 a2=0 a3=0 items=0 ppid=3003 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:54.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:54.303000 audit[3309]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:54.303000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff098f82f0 a2=0 a3=7fff098f82dc items=0 ppid=3003 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:54.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:54.312000 audit[3309]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:54.312000 audit[3309]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff098f82f0 a2=0 a3=0 items=0 ppid=3003 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:54.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:55.338000 audit[3311]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:55.338000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2129c190 a2=0 a3=7ffe2129c17c 
items=0 ppid=3003 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:55.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:55.347000 audit[3311]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:55.364248 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 14 13:27:55.364308 kernel: audit: type=1325 audit(1768397275.347:528): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:55.347000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2129c190 a2=0 a3=0 items=0 ppid=3003 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:55.397389 kernel: audit: type=1300 audit(1768397275.347:528): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe2129c190 a2=0 a3=0 items=0 ppid=3003 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:55.397501 kernel: audit: type=1327 audit(1768397275.347:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:55.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:56.738980 systemd[1]: Created slice 
kubepods-besteffort-pod9cc46bcf_82b6_4ed3_aef1_570a57e4485c.slice - libcontainer container kubepods-besteffort-pod9cc46bcf_82b6_4ed3_aef1_570a57e4485c.slice. Jan 14 13:27:56.763000 audit[3313]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:56.820944 kubelet[2886]: I0114 13:27:56.809634 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9cc46bcf-82b6-4ed3-aef1-570a57e4485c-typha-certs\") pod \"calico-typha-789f445fbc-j7w85\" (UID: \"9cc46bcf-82b6-4ed3-aef1-570a57e4485c\") " pod="calico-system/calico-typha-789f445fbc-j7w85" Jan 14 13:27:56.820944 kubelet[2886]: I0114 13:27:56.809687 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cc46bcf-82b6-4ed3-aef1-570a57e4485c-tigera-ca-bundle\") pod \"calico-typha-789f445fbc-j7w85\" (UID: \"9cc46bcf-82b6-4ed3-aef1-570a57e4485c\") " pod="calico-system/calico-typha-789f445fbc-j7w85" Jan 14 13:27:56.820944 kubelet[2886]: I0114 13:27:56.809722 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhsqx\" (UniqueName: \"kubernetes.io/projected/9cc46bcf-82b6-4ed3-aef1-570a57e4485c-kube-api-access-lhsqx\") pod \"calico-typha-789f445fbc-j7w85\" (UID: \"9cc46bcf-82b6-4ed3-aef1-570a57e4485c\") " pod="calico-system/calico-typha-789f445fbc-j7w85" Jan 14 13:27:56.763000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffddd2999a0 a2=0 a3=7ffddd29998c items=0 ppid=3003 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:56.868343 kernel: audit: type=1325 
audit(1768397276.763:529): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:56.868413 kernel: audit: type=1300 audit(1768397276.763:529): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffddd2999a0 a2=0 a3=7ffddd29998c items=0 ppid=3003 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:56.868434 kernel: audit: type=1327 audit(1768397276.763:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:56.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:56.868000 audit[3313]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:56.901937 kernel: audit: type=1325 audit(1768397276.868:530): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:56.902025 kernel: audit: type=1300 audit(1768397276.868:530): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddd2999a0 a2=0 a3=0 items=0 ppid=3003 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:56.868000 audit[3313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddd2999a0 a2=0 a3=0 items=0 ppid=3003 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:27:56.936875 kernel: audit: type=1327 audit(1768397276.868:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:56.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:57.012942 systemd[1]: Created slice kubepods-besteffort-pod713ca1dd_2a9c_45b6_9158_f1c151f32e67.slice - libcontainer container kubepods-besteffort-pod713ca1dd_2a9c_45b6_9158_f1c151f32e67.slice. Jan 14 13:27:57.054505 kubelet[2886]: E0114 13:27:57.053656 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:57.055708 containerd[1649]: time="2026-01-14T13:27:57.055669795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789f445fbc-j7w85,Uid:9cc46bcf-82b6-4ed3-aef1-570a57e4485c,Namespace:calico-system,Attempt:0,}" Jan 14 13:27:57.116550 kubelet[2886]: I0114 13:27:57.115778 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-flexvol-driver-host\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.116550 kubelet[2886]: I0114 13:27:57.115985 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-cni-log-dir\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.116550 kubelet[2886]: I0114 13:27:57.116008 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/713ca1dd-2a9c-45b6-9158-f1c151f32e67-tigera-ca-bundle\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.116550 kubelet[2886]: I0114 13:27:57.116033 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-lib-modules\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.116550 kubelet[2886]: I0114 13:27:57.116054 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-var-run-calico\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117044 kubelet[2886]: I0114 13:27:57.116271 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-xtables-lock\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117044 kubelet[2886]: I0114 13:27:57.116301 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvbq\" (UniqueName: \"kubernetes.io/projected/713ca1dd-2a9c-45b6-9158-f1c151f32e67-kube-api-access-mdvbq\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117044 kubelet[2886]: I0114 13:27:57.116320 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-cni-bin-dir\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117044 kubelet[2886]: I0114 13:27:57.116333 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-cni-net-dir\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117044 kubelet[2886]: I0114 13:27:57.116353 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-var-lib-calico\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117393 kubelet[2886]: I0114 13:27:57.116367 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/713ca1dd-2a9c-45b6-9158-f1c151f32e67-node-certs\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.117393 kubelet[2886]: I0114 13:27:57.116380 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/713ca1dd-2a9c-45b6-9158-f1c151f32e67-policysync\") pod \"calico-node-phls5\" (UID: \"713ca1dd-2a9c-45b6-9158-f1c151f32e67\") " pod="calico-system/calico-node-phls5" Jan 14 13:27:57.161404 containerd[1649]: time="2026-01-14T13:27:57.160500155Z" level=info msg="connecting to shim 245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403" address="unix:///run/containerd/s/a2c98f0546e7a02b3e7c82baabbeb12d8efd1b0d171b70c799fbd374267224c9" 
namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:57.174632 kubelet[2886]: E0114 13:27:57.174493 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:27:57.217701 kubelet[2886]: I0114 13:27:57.217482 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8200b33d-eb45-4c93-98d1-0c3029a31280-kubelet-dir\") pod \"csi-node-driver-ckktd\" (UID: \"8200b33d-eb45-4c93-98d1-0c3029a31280\") " pod="calico-system/csi-node-driver-ckktd" Jan 14 13:27:57.217701 kubelet[2886]: I0114 13:27:57.217597 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8200b33d-eb45-4c93-98d1-0c3029a31280-socket-dir\") pod \"csi-node-driver-ckktd\" (UID: \"8200b33d-eb45-4c93-98d1-0c3029a31280\") " pod="calico-system/csi-node-driver-ckktd" Jan 14 13:27:57.217701 kubelet[2886]: I0114 13:27:57.217613 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8200b33d-eb45-4c93-98d1-0c3029a31280-varrun\") pod \"csi-node-driver-ckktd\" (UID: \"8200b33d-eb45-4c93-98d1-0c3029a31280\") " pod="calico-system/csi-node-driver-ckktd" Jan 14 13:27:57.217701 kubelet[2886]: I0114 13:27:57.217678 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8200b33d-eb45-4c93-98d1-0c3029a31280-registration-dir\") pod \"csi-node-driver-ckktd\" (UID: \"8200b33d-eb45-4c93-98d1-0c3029a31280\") " pod="calico-system/csi-node-driver-ckktd" Jan 14 
13:27:57.217701 kubelet[2886]: I0114 13:27:57.217694 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xm2\" (UniqueName: \"kubernetes.io/projected/8200b33d-eb45-4c93-98d1-0c3029a31280-kube-api-access-j4xm2\") pod \"csi-node-driver-ckktd\" (UID: \"8200b33d-eb45-4c93-98d1-0c3029a31280\") " pod="calico-system/csi-node-driver-ckktd" Jan 14 13:27:57.227019 kubelet[2886]: E0114 13:27:57.222292 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.227019 kubelet[2886]: W0114 13:27:57.222395 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.227019 kubelet[2886]: E0114 13:27:57.222417 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.227019 kubelet[2886]: E0114 13:27:57.222650 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.227019 kubelet[2886]: W0114 13:27:57.223656 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.227019 kubelet[2886]: E0114 13:27:57.223677 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.237294 kubelet[2886]: E0114 13:27:57.235015 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.237294 kubelet[2886]: W0114 13:27:57.235294 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.237294 kubelet[2886]: E0114 13:27:57.235314 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.240971 kubelet[2886]: E0114 13:27:57.240464 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.240971 kubelet[2886]: W0114 13:27:57.240563 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.240971 kubelet[2886]: E0114 13:27:57.240580 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.246618 kubelet[2886]: E0114 13:27:57.246389 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.246618 kubelet[2886]: W0114 13:27:57.246409 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.246618 kubelet[2886]: E0114 13:27:57.246424 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.250307 kubelet[2886]: E0114 13:27:57.249557 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.250307 kubelet[2886]: W0114 13:27:57.249574 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.250307 kubelet[2886]: E0114 13:27:57.249591 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.250607 kubelet[2886]: E0114 13:27:57.250509 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.250607 kubelet[2886]: W0114 13:27:57.250600 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.250662 kubelet[2886]: E0114 13:27:57.250614 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.252325 kubelet[2886]: E0114 13:27:57.251697 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.252325 kubelet[2886]: W0114 13:27:57.251711 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.252325 kubelet[2886]: E0114 13:27:57.251724 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.266458 kubelet[2886]: E0114 13:27:57.266067 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.267962 kubelet[2886]: W0114 13:27:57.267902 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.267962 kubelet[2886]: E0114 13:27:57.267928 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.284675 systemd[1]: Started cri-containerd-245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403.scope - libcontainer container 245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403. Jan 14 13:27:57.319041 kubelet[2886]: E0114 13:27:57.319010 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:57.321994 kubelet[2886]: E0114 13:27:57.321503 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.321994 kubelet[2886]: W0114 13:27:57.321523 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.321994 kubelet[2886]: E0114 13:27:57.321543 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.323595 containerd[1649]: time="2026-01-14T13:27:57.323553653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-phls5,Uid:713ca1dd-2a9c-45b6-9158-f1c151f32e67,Namespace:calico-system,Attempt:0,}" Jan 14 13:27:57.328396 kubelet[2886]: E0114 13:27:57.327321 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.328396 kubelet[2886]: W0114 13:27:57.327406 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.328396 kubelet[2886]: E0114 13:27:57.327422 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.335189 kubelet[2886]: E0114 13:27:57.334298 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.335189 kubelet[2886]: W0114 13:27:57.334312 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.335189 kubelet[2886]: E0114 13:27:57.334325 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.337512 kubelet[2886]: E0114 13:27:57.336982 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.337512 kubelet[2886]: W0114 13:27:57.337216 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.337512 kubelet[2886]: E0114 13:27:57.337231 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.345037 kubelet[2886]: E0114 13:27:57.344469 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.345037 kubelet[2886]: W0114 13:27:57.344570 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.345037 kubelet[2886]: E0114 13:27:57.344585 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.350948 kubelet[2886]: E0114 13:27:57.350563 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.350948 kubelet[2886]: W0114 13:27:57.350653 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.350948 kubelet[2886]: E0114 13:27:57.350663 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.354424 kubelet[2886]: E0114 13:27:57.354322 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.354424 kubelet[2886]: W0114 13:27:57.354405 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.354424 kubelet[2886]: E0114 13:27:57.354415 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.357748 kubelet[2886]: E0114 13:27:57.357531 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.357748 kubelet[2886]: W0114 13:27:57.357544 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.357748 kubelet[2886]: E0114 13:27:57.357554 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.359760 kubelet[2886]: E0114 13:27:57.359693 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.359760 kubelet[2886]: W0114 13:27:57.359708 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.359760 kubelet[2886]: E0114 13:27:57.359724 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.363400 kubelet[2886]: E0114 13:27:57.362989 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.363400 kubelet[2886]: W0114 13:27:57.363067 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.363400 kubelet[2886]: E0114 13:27:57.363243 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.365561 kubelet[2886]: E0114 13:27:57.364993 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.365561 kubelet[2886]: W0114 13:27:57.365005 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.365561 kubelet[2886]: E0114 13:27:57.365015 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.368715 kubelet[2886]: E0114 13:27:57.368676 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.368715 kubelet[2886]: W0114 13:27:57.368690 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.368715 kubelet[2886]: E0114 13:27:57.368699 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.372597 kubelet[2886]: E0114 13:27:57.372416 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.372597 kubelet[2886]: W0114 13:27:57.372515 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.372597 kubelet[2886]: E0114 13:27:57.372527 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.376354 kubelet[2886]: E0114 13:27:57.376045 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.377364 kubelet[2886]: W0114 13:27:57.377235 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.377364 kubelet[2886]: E0114 13:27:57.377340 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.379958 kubelet[2886]: E0114 13:27:57.379727 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.380006 kubelet[2886]: W0114 13:27:57.379959 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.380006 kubelet[2886]: E0114 13:27:57.379974 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.382915 kubelet[2886]: E0114 13:27:57.382676 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.382915 kubelet[2886]: W0114 13:27:57.382770 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.382915 kubelet[2886]: E0114 13:27:57.382781 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.388235 kubelet[2886]: E0114 13:27:57.387025 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.388235 kubelet[2886]: W0114 13:27:57.387038 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.388235 kubelet[2886]: E0114 13:27:57.387048 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.392715 kubelet[2886]: E0114 13:27:57.392410 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.392715 kubelet[2886]: W0114 13:27:57.392492 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.392715 kubelet[2886]: E0114 13:27:57.392503 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.393556 kubelet[2886]: E0114 13:27:57.393403 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.393556 kubelet[2886]: W0114 13:27:57.393484 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.393556 kubelet[2886]: E0114 13:27:57.393495 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.401759 kubelet[2886]: E0114 13:27:57.401710 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.401759 kubelet[2886]: W0114 13:27:57.401724 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.401759 kubelet[2886]: E0114 13:27:57.401737 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.406403 kubelet[2886]: E0114 13:27:57.406386 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.408666 kubelet[2886]: W0114 13:27:57.406600 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.408666 kubelet[2886]: E0114 13:27:57.406616 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.411005 kubelet[2886]: E0114 13:27:57.410694 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.411005 kubelet[2886]: W0114 13:27:57.410709 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.411005 kubelet[2886]: E0114 13:27:57.410721 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.411595 kubelet[2886]: E0114 13:27:57.411579 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.411681 kubelet[2886]: W0114 13:27:57.411665 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.411763 kubelet[2886]: E0114 13:27:57.411744 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.413243 kubelet[2886]: E0114 13:27:57.413059 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.413444 containerd[1649]: time="2026-01-14T13:27:57.413404764Z" level=info msg="connecting to shim 1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547" address="unix:///run/containerd/s/f334733878ea59d9a0602a8ed04f104c05be73862c9b4807aea801bb38a692f3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:27:57.413562 kubelet[2886]: W0114 13:27:57.413526 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.413562 kubelet[2886]: E0114 13:27:57.413547 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.415279 kubelet[2886]: E0114 13:27:57.415051 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.415279 kubelet[2886]: W0114 13:27:57.415064 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.415279 kubelet[2886]: E0114 13:27:57.415247 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:27:57.452884 kubelet[2886]: E0114 13:27:57.452771 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:27:57.453269 kubelet[2886]: W0114 13:27:57.453041 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:27:57.453269 kubelet[2886]: E0114 13:27:57.453070 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:27:57.541000 audit: BPF prog-id=154 op=LOAD Jan 14 13:27:57.553356 kernel: audit: type=1334 audit(1768397277.541:531): prog-id=154 op=LOAD Jan 14 13:27:57.547000 audit: BPF prog-id=155 op=LOAD Jan 14 13:27:57.547000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.547000 audit: BPF prog-id=155 op=UNLOAD Jan 14 13:27:57.547000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.547000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.561000 audit: BPF prog-id=156 op=LOAD Jan 14 13:27:57.561000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.561000 audit: BPF prog-id=157 op=LOAD Jan 14 13:27:57.561000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.563000 audit: BPF prog-id=157 op=UNLOAD Jan 14 13:27:57.563000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:27:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.563000 audit: BPF prog-id=156 op=UNLOAD Jan 14 13:27:57.564066 systemd[1]: Started cri-containerd-1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547.scope - libcontainer container 1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547. Jan 14 13:27:57.563000 audit[3335]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.563000 audit: BPF prog-id=158 op=LOAD Jan 14 13:27:57.563000 audit[3335]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3324 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234356331653937633834383462323634623831663338313062376365 Jan 14 13:27:57.679000 audit: BPF prog-id=159 op=LOAD Jan 14 13:27:57.680000 audit: BPF prog-id=160 
op=LOAD Jan 14 13:27:57.680000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.681000 audit: BPF prog-id=160 op=UNLOAD Jan 14 13:27:57.681000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.682000 audit: BPF prog-id=161 op=LOAD Jan 14 13:27:57.682000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 
14 13:27:57.682000 audit: BPF prog-id=162 op=LOAD Jan 14 13:27:57.682000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.682000 audit: BPF prog-id=162 op=UNLOAD Jan 14 13:27:57.682000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.682000 audit: BPF prog-id=161 op=UNLOAD Jan 14 13:27:57.682000 audit[3413]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.682000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.682000 audit: BPF prog-id=163 op=LOAD Jan 14 13:27:57.682000 audit[3413]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3394 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162653962663130303035626433366136643761613332393331653030 Jan 14 13:27:57.719528 containerd[1649]: time="2026-01-14T13:27:57.718734949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-789f445fbc-j7w85,Uid:9cc46bcf-82b6-4ed3-aef1-570a57e4485c,Namespace:calico-system,Attempt:0,} returns sandbox id \"245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403\"" Jan 14 13:27:57.733607 kubelet[2886]: E0114 13:27:57.733420 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:57.749200 containerd[1649]: time="2026-01-14T13:27:57.748729296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 13:27:57.793353 containerd[1649]: time="2026-01-14T13:27:57.793306126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-phls5,Uid:713ca1dd-2a9c-45b6-9158-f1c151f32e67,Namespace:calico-system,Attempt:0,} returns sandbox id \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\"" Jan 14 
13:27:57.796662 kubelet[2886]: E0114 13:27:57.796641 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:27:57.987000 audit[3444]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:57.987000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc01426cc0 a2=0 a3=7ffc01426cac items=0 ppid=3003 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.987000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:57.998000 audit[3444]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3444 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:27:57.998000 audit[3444]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc01426cc0 a2=0 a3=0 items=0 ppid=3003 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:27:57.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:27:58.526585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186622331.mount: Deactivated successfully. 
Jan 14 13:27:58.872797 kubelet[2886]: E0114 13:27:58.872354 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:00.871253 kubelet[2886]: E0114 13:28:00.871040 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:01.185517 containerd[1649]: time="2026-01-14T13:28:01.184768494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:01.189003 containerd[1649]: time="2026-01-14T13:28:01.188640671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 13:28:01.192358 containerd[1649]: time="2026-01-14T13:28:01.191786366Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:01.197578 containerd[1649]: time="2026-01-14T13:28:01.197271348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:01.198509 containerd[1649]: time="2026-01-14T13:28:01.198403515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.449373916s" Jan 14 13:28:01.198509 containerd[1649]: time="2026-01-14T13:28:01.198514985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 13:28:01.202286 containerd[1649]: time="2026-01-14T13:28:01.201820288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 13:28:01.239976 containerd[1649]: time="2026-01-14T13:28:01.239942483Z" level=info msg="CreateContainer within sandbox \"245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 13:28:01.271467 containerd[1649]: time="2026-01-14T13:28:01.271328696Z" level=info msg="Container 6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:01.289226 containerd[1649]: time="2026-01-14T13:28:01.288927794Z" level=info msg="CreateContainer within sandbox \"245c1e97c8484b264b81f3810b7cec769898f9ce9974e8e775ca2cab09f73403\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722\"" Jan 14 13:28:01.292221 containerd[1649]: time="2026-01-14T13:28:01.291661959Z" level=info msg="StartContainer for \"6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722\"" Jan 14 13:28:01.294830 containerd[1649]: time="2026-01-14T13:28:01.294251912Z" level=info msg="connecting to shim 6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722" address="unix:///run/containerd/s/a2c98f0546e7a02b3e7c82baabbeb12d8efd1b0d171b70c799fbd374267224c9" protocol=ttrpc version=3 Jan 14 13:28:01.357340 systemd[1]: Started 
cri-containerd-6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722.scope - libcontainer container 6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722. Jan 14 13:28:01.393000 audit: BPF prog-id=164 op=LOAD Jan 14 13:28:01.412626 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 14 13:28:01.412765 kernel: audit: type=1334 audit(1768397281.393:549): prog-id=164 op=LOAD Jan 14 13:28:01.412807 kernel: audit: type=1334 audit(1768397281.395:550): prog-id=165 op=LOAD Jan 14 13:28:01.395000 audit: BPF prog-id=165 op=LOAD Jan 14 13:28:01.421737 kernel: audit: type=1300 audit(1768397281.395:550): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.395000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.486673 kernel: audit: type=1327 audit(1768397281.395:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.395000 audit: BPF prog-id=165 op=UNLOAD Jan 14 13:28:01.496304 kernel: audit: type=1334 
audit(1768397281.395:551): prog-id=165 op=UNLOAD Jan 14 13:28:01.395000 audit[3456]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.527834 kernel: audit: type=1300 audit(1768397281.395:551): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.528390 kernel: audit: type=1327 audit(1768397281.395:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.395000 audit: BPF prog-id=166 op=LOAD Jan 14 13:28:01.569250 kernel: audit: type=1334 audit(1768397281.395:552): prog-id=166 op=LOAD Jan 14 13:28:01.569321 kernel: audit: type=1300 audit(1768397281.395:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.395000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3324 
pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.577195 containerd[1649]: time="2026-01-14T13:28:01.577039866Z" level=info msg="StartContainer for \"6fcda587c27c8b11844a61f2f6108600a027d49ecd563f2bef1036c300663722\" returns successfully" Jan 14 13:28:01.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.621799 kernel: audit: type=1327 audit(1768397281.395:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.396000 audit: BPF prog-id=167 op=LOAD Jan 14 13:28:01.396000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.396000 audit: BPF prog-id=167 op=UNLOAD Jan 14 13:28:01.396000 audit[3456]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.396000 audit: BPF prog-id=166 op=UNLOAD Jan 14 13:28:01.396000 audit[3456]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:01.396000 audit: BPF prog-id=168 op=LOAD Jan 14 13:28:01.396000 audit[3456]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3324 pid=3456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:01.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666636461353837633237633862313138343461363166326636313038 Jan 14 13:28:02.082227 kubelet[2886]: E0114 13:28:02.081458 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
14 13:28:02.146408 kubelet[2886]: E0114 13:28:02.146301 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.146408 kubelet[2886]: W0114 13:28:02.146323 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.146408 kubelet[2886]: E0114 13:28:02.146344 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.147656 kubelet[2886]: E0114 13:28:02.147593 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.147656 kubelet[2886]: W0114 13:28:02.147606 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.147656 kubelet[2886]: E0114 13:28:02.147616 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.149462 kubelet[2886]: E0114 13:28:02.149045 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.149462 kubelet[2886]: W0114 13:28:02.149056 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.149462 kubelet[2886]: E0114 13:28:02.149065 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.150978 kubelet[2886]: E0114 13:28:02.150964 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.151055 kubelet[2886]: W0114 13:28:02.151043 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.151277 kubelet[2886]: E0114 13:28:02.151259 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.152450 kubelet[2886]: E0114 13:28:02.152436 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.152526 kubelet[2886]: W0114 13:28:02.152510 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.152573 kubelet[2886]: E0114 13:28:02.152564 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.153069 kubelet[2886]: E0114 13:28:02.153057 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.153356 kubelet[2886]: W0114 13:28:02.153287 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.153356 kubelet[2886]: E0114 13:28:02.153307 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.154680 kubelet[2886]: E0114 13:28:02.154464 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.154680 kubelet[2886]: W0114 13:28:02.154480 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.154680 kubelet[2886]: E0114 13:28:02.154494 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.155234 kubelet[2886]: E0114 13:28:02.155219 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.155396 kubelet[2886]: W0114 13:28:02.155381 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.155458 kubelet[2886]: E0114 13:28:02.155445 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.156389 kubelet[2886]: E0114 13:28:02.156309 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.156389 kubelet[2886]: W0114 13:28:02.156324 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.156389 kubelet[2886]: E0114 13:28:02.156335 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.157487 kubelet[2886]: E0114 13:28:02.157411 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.157487 kubelet[2886]: W0114 13:28:02.157427 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.157487 kubelet[2886]: E0114 13:28:02.157439 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.158321 kubelet[2886]: E0114 13:28:02.158231 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.158321 kubelet[2886]: W0114 13:28:02.158252 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.158321 kubelet[2886]: E0114 13:28:02.158266 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.159453 kubelet[2886]: E0114 13:28:02.159439 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.159527 kubelet[2886]: W0114 13:28:02.159514 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.159582 kubelet[2886]: E0114 13:28:02.159573 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.160290 kubelet[2886]: E0114 13:28:02.160214 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.160290 kubelet[2886]: W0114 13:28:02.160226 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.160290 kubelet[2886]: E0114 13:28:02.160236 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.161253 kubelet[2886]: E0114 13:28:02.160994 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.161253 kubelet[2886]: W0114 13:28:02.161008 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.161253 kubelet[2886]: E0114 13:28:02.161020 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.161588 kubelet[2886]: E0114 13:28:02.161574 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.161654 kubelet[2886]: W0114 13:28:02.161641 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.161722 kubelet[2886]: E0114 13:28:02.161711 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.221534 kubelet[2886]: E0114 13:28:02.221268 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.221534 kubelet[2886]: W0114 13:28:02.221495 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.221534 kubelet[2886]: E0114 13:28:02.221520 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.222382 kubelet[2886]: E0114 13:28:02.221831 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.222382 kubelet[2886]: W0114 13:28:02.221999 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.222382 kubelet[2886]: E0114 13:28:02.222011 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.222382 kubelet[2886]: E0114 13:28:02.222347 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.222382 kubelet[2886]: W0114 13:28:02.222356 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.222382 kubelet[2886]: E0114 13:28:02.222364 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.222782 kubelet[2886]: E0114 13:28:02.222684 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.222782 kubelet[2886]: W0114 13:28:02.222779 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.222782 kubelet[2886]: E0114 13:28:02.222793 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.225364 kubelet[2886]: E0114 13:28:02.225023 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.225781 kubelet[2886]: W0114 13:28:02.225686 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.225839 kubelet[2886]: E0114 13:28:02.225786 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.227346 kubelet[2886]: E0114 13:28:02.227028 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.227537 kubelet[2886]: W0114 13:28:02.227456 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.227537 kubelet[2886]: E0114 13:28:02.227532 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.229451 kubelet[2886]: E0114 13:28:02.229303 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.229656 kubelet[2886]: W0114 13:28:02.229570 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.229656 kubelet[2886]: E0114 13:28:02.229647 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.235715 kubelet[2886]: E0114 13:28:02.235452 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.235715 kubelet[2886]: W0114 13:28:02.235537 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.235715 kubelet[2886]: E0114 13:28:02.235551 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.237281 kubelet[2886]: E0114 13:28:02.236788 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.237281 kubelet[2886]: W0114 13:28:02.236971 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.237281 kubelet[2886]: E0114 13:28:02.236985 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.238292 kubelet[2886]: E0114 13:28:02.237627 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.238292 kubelet[2886]: W0114 13:28:02.237723 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.238292 kubelet[2886]: E0114 13:28:02.237738 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.239694 kubelet[2886]: E0114 13:28:02.239612 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.239694 kubelet[2886]: W0114 13:28:02.239684 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.239768 kubelet[2886]: E0114 13:28:02.239718 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.241840 kubelet[2886]: E0114 13:28:02.240811 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.241840 kubelet[2886]: W0114 13:28:02.241309 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.241840 kubelet[2886]: E0114 13:28:02.241321 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.241840 kubelet[2886]: E0114 13:28:02.241737 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.241840 kubelet[2886]: W0114 13:28:02.241745 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.242362 kubelet[2886]: E0114 13:28:02.242336 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.243452 kubelet[2886]: E0114 13:28:02.243042 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.243452 kubelet[2886]: W0114 13:28:02.243281 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.243452 kubelet[2886]: E0114 13:28:02.243293 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.244541 kubelet[2886]: E0114 13:28:02.244427 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.244541 kubelet[2886]: W0114 13:28:02.244502 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.244541 kubelet[2886]: E0114 13:28:02.244512 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.245812 kubelet[2886]: E0114 13:28:02.245715 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.245812 kubelet[2886]: W0114 13:28:02.245786 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.245812 kubelet[2886]: E0114 13:28:02.245796 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.247355 kubelet[2886]: E0114 13:28:02.247297 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.247355 kubelet[2886]: W0114 13:28:02.247314 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.247355 kubelet[2886]: E0114 13:28:02.247325 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:28:02.249330 kubelet[2886]: E0114 13:28:02.248944 2886 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:28:02.249330 kubelet[2886]: W0114 13:28:02.248965 2886 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:28:02.249330 kubelet[2886]: E0114 13:28:02.248978 2886 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:28:02.267253 containerd[1649]: time="2026-01-14T13:28:02.266797204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:02.270815 containerd[1649]: time="2026-01-14T13:28:02.270408002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:02.273965 containerd[1649]: time="2026-01-14T13:28:02.273834851Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:02.277750 containerd[1649]: time="2026-01-14T13:28:02.277631972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:02.278310 containerd[1649]: time="2026-01-14T13:28:02.278009284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.076003298s" Jan 14 13:28:02.278310 containerd[1649]: time="2026-01-14T13:28:02.278038989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 13:28:02.289793 containerd[1649]: time="2026-01-14T13:28:02.289594220Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 13:28:02.310533 containerd[1649]: time="2026-01-14T13:28:02.309955769Z" level=info msg="Container e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:02.344508 containerd[1649]: time="2026-01-14T13:28:02.343951765Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d\"" Jan 14 13:28:02.346979 containerd[1649]: time="2026-01-14T13:28:02.346659105Z" level=info msg="StartContainer for \"e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d\"" Jan 14 13:28:02.350527 containerd[1649]: time="2026-01-14T13:28:02.349635942Z" level=info msg="connecting to shim e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d" address="unix:///run/containerd/s/f334733878ea59d9a0602a8ed04f104c05be73862c9b4807aea801bb38a692f3" protocol=ttrpc version=3 Jan 14 13:28:02.431504 systemd[1]: Started cri-containerd-e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d.scope - libcontainer container e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d. 
Jan 14 13:28:02.530000 audit: BPF prog-id=169 op=LOAD Jan 14 13:28:02.530000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3394 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:02.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376563613332313163653232303763303861626438363730356466 Jan 14 13:28:02.530000 audit: BPF prog-id=170 op=LOAD Jan 14 13:28:02.530000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3394 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:02.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376563613332313163653232303763303861626438363730356466 Jan 14 13:28:02.530000 audit: BPF prog-id=170 op=UNLOAD Jan 14 13:28:02.530000 audit[3534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:02.530000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376563613332313163653232303763303861626438363730356466 Jan 14 13:28:02.530000 audit: BPF prog-id=169 op=UNLOAD Jan 14 13:28:02.530000 audit[3534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:02.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376563613332313163653232303763303861626438363730356466 Jan 14 13:28:02.530000 audit: BPF prog-id=171 op=LOAD Jan 14 13:28:02.530000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3394 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:02.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539376563613332313163653232303763303861626438363730356466 Jan 14 13:28:02.597328 containerd[1649]: time="2026-01-14T13:28:02.592390041Z" level=info msg="StartContainer for \"e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d\" returns successfully" Jan 14 13:28:02.613660 systemd[1]: cri-containerd-e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d.scope: Deactivated successfully. 
Jan 14 13:28:02.619000 audit: BPF prog-id=171 op=UNLOAD Jan 14 13:28:02.626828 containerd[1649]: time="2026-01-14T13:28:02.626575445Z" level=info msg="received container exit event container_id:\"e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d\" id:\"e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d\" pid:3547 exited_at:{seconds:1768397282 nanos:625475507}" Jan 14 13:28:02.711004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97eca3211ce2207c08abd86705dfc5fc057ac9931c5aa81b9aa77ab33b7623d-rootfs.mount: Deactivated successfully. Jan 14 13:28:02.872550 kubelet[2886]: E0114 13:28:02.871684 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:03.088537 kubelet[2886]: I0114 13:28:03.088366 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:28:03.090494 kubelet[2886]: E0114 13:28:03.088633 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:03.090494 kubelet[2886]: E0114 13:28:03.089036 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:03.094839 containerd[1649]: time="2026-01-14T13:28:03.093807741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 13:28:03.128430 kubelet[2886]: I0114 13:28:03.127779 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-789f445fbc-j7w85" podStartSLOduration=3.665197137 podStartE2EDuration="7.127756674s" 
podCreationTimestamp="2026-01-14 13:27:56 +0000 UTC" firstStartedPulling="2026-01-14 13:27:57.738072196 +0000 UTC m=+24.175693603" lastFinishedPulling="2026-01-14 13:28:01.200631733 +0000 UTC m=+27.638253140" observedRunningTime="2026-01-14 13:28:02.125060154 +0000 UTC m=+28.562681591" watchObservedRunningTime="2026-01-14 13:28:03.127756674 +0000 UTC m=+29.565378111" Jan 14 13:28:04.872677 kubelet[2886]: E0114 13:28:04.872486 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:06.871247 kubelet[2886]: E0114 13:28:06.871024 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:07.159741 containerd[1649]: time="2026-01-14T13:28:07.159317673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:07.161467 containerd[1649]: time="2026-01-14T13:28:07.161242760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 13:28:07.164537 containerd[1649]: time="2026-01-14T13:28:07.164498151Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:07.171750 containerd[1649]: time="2026-01-14T13:28:07.171579900Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:07.172746 containerd[1649]: time="2026-01-14T13:28:07.172718382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.078258957s" Jan 14 13:28:07.173251 containerd[1649]: time="2026-01-14T13:28:07.172933391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 13:28:07.183568 containerd[1649]: time="2026-01-14T13:28:07.183444462Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 13:28:07.205892 containerd[1649]: time="2026-01-14T13:28:07.205654856Z" level=info msg="Container 86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:07.229998 containerd[1649]: time="2026-01-14T13:28:07.229791072Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7\"" Jan 14 13:28:07.232606 containerd[1649]: time="2026-01-14T13:28:07.232062037Z" level=info msg="StartContainer for \"86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7\"" Jan 14 13:28:07.234666 containerd[1649]: time="2026-01-14T13:28:07.234637530Z" level=info msg="connecting to shim 
86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7" address="unix:///run/containerd/s/f334733878ea59d9a0602a8ed04f104c05be73862c9b4807aea801bb38a692f3" protocol=ttrpc version=3 Jan 14 13:28:07.278468 systemd[1]: Started cri-containerd-86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7.scope - libcontainer container 86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7. Jan 14 13:28:07.387000 audit: BPF prog-id=172 op=LOAD Jan 14 13:28:07.394587 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 13:28:07.394732 kernel: audit: type=1334 audit(1768397287.387:563): prog-id=172 op=LOAD Jan 14 13:28:07.387000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:07.435759 kernel: audit: type=1300 audit(1768397287.387:563): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:07.438961 kernel: audit: type=1327 audit(1768397287.387:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235 Jan 14 13:28:07.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235 Jan 14 13:28:07.387000 audit: BPF 
prog-id=173 op=LOAD
Jan 14 13:28:07.479768 kernel: audit: type=1334 audit(1768397287.387:564): prog-id=173 op=LOAD
Jan 14 13:28:07.480405 kernel: audit: type=1300 audit(1768397287.387:564): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.387000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.515780 kernel: audit: type=1327 audit(1768397287.387:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:07.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:07.387000 audit: BPF prog-id=173 op=UNLOAD
Jan 14 13:28:07.559398 kernel: audit: type=1334 audit(1768397287.387:565): prog-id=173 op=UNLOAD
Jan 14 13:28:07.559749 kernel: audit: type=1300 audit(1768397287.387:565): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.387000 audit[3592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.581171 containerd[1649]: time="2026-01-14T13:28:07.580293361Z" level=info msg="StartContainer for \"86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7\" returns successfully"
Jan 14 13:28:07.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:07.628421 kernel: audit: type=1327 audit(1768397287.387:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:07.387000 audit: BPF prog-id=172 op=UNLOAD
Jan 14 13:28:07.643708 kernel: audit: type=1334 audit(1768397287.387:566): prog-id=172 op=UNLOAD
Jan 14 13:28:07.387000 audit[3592]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:07.387000 audit: BPF prog-id=174 op=LOAD
Jan 14 13:28:07.387000 audit[3592]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3394 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:28:07.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836653630363164656264316162363833346534623161376265373235
Jan 14 13:28:08.136493 kubelet[2886]: E0114 13:28:08.136458 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:28:08.643349 systemd[1]: cri-containerd-86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7.scope: Deactivated successfully.
Jan 14 13:28:08.643935 systemd[1]: cri-containerd-86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7.scope: Consumed 1.166s CPU time, 169.5M memory peak, 4.9M read from disk, 171.3M written to disk.
Jan 14 13:28:08.648000 audit: BPF prog-id=174 op=UNLOAD
Jan 14 13:28:08.655572 containerd[1649]: time="2026-01-14T13:28:08.655420295Z" level=info msg="received container exit event container_id:\"86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7\" id:\"86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7\" pid:3606 exited_at:{seconds:1768397288 nanos:655022467}"
Jan 14 13:28:08.718978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86e6061debd1ab6834e4b1a7be725e4abe117c6fd8f18aa39ad26e64830343d7-rootfs.mount: Deactivated successfully.
Jan 14 13:28:08.725665 kubelet[2886]: I0114 13:28:08.725544 2886 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 14 13:28:08.844440 systemd[1]: Created slice kubepods-burstable-podd4e5f128_84cf_45f6_bd4e_05162a204a27.slice - libcontainer container kubepods-burstable-podd4e5f128_84cf_45f6_bd4e_05162a204a27.slice.
Jan 14 13:28:08.880302 systemd[1]: Created slice kubepods-besteffort-podc6e0811d_d1ed_43ea_9ccd_2eda32c8d3f9.slice - libcontainer container kubepods-besteffort-podc6e0811d_d1ed_43ea_9ccd_2eda32c8d3f9.slice.
Jan 14 13:28:08.896069 systemd[1]: Created slice kubepods-besteffort-podfc822dd2_4a0b_4df8_969d_8ce5598b7069.slice - libcontainer container kubepods-besteffort-podfc822dd2_4a0b_4df8_969d_8ce5598b7069.slice.
Jan 14 13:28:08.912002 kubelet[2886]: I0114 13:28:08.911962 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43c81015-17c1-4886-ba54-03a8237f3050-calico-apiserver-certs\") pod \"calico-apiserver-68b6f8f57b-4vsgx\" (UID: \"43c81015-17c1-4886-ba54-03a8237f3050\") " pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx"
Jan 14 13:28:08.913888 kubelet[2886]: I0114 13:28:08.913858 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e5f128-84cf-45f6-bd4e-05162a204a27-config-volume\") pod \"coredns-674b8bbfcf-d7kbj\" (UID: \"d4e5f128-84cf-45f6-bd4e-05162a204a27\") " pod="kube-system/coredns-674b8bbfcf-d7kbj"
Jan 14 13:28:08.914893 kubelet[2886]: I0114 13:28:08.914875 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtj2\" (UniqueName: \"kubernetes.io/projected/d4e5f128-84cf-45f6-bd4e-05162a204a27-kube-api-access-4wtj2\") pod \"coredns-674b8bbfcf-d7kbj\" (UID: \"d4e5f128-84cf-45f6-bd4e-05162a204a27\") " pod="kube-system/coredns-674b8bbfcf-d7kbj"
Jan 14 13:28:08.915971 systemd[1]: Created slice kubepods-besteffort-pod43c81015_17c1_4886_ba54_03a8237f3050.slice - libcontainer container kubepods-besteffort-pod43c81015_17c1_4886_ba54_03a8237f3050.slice.
Jan 14 13:28:08.917894 kubelet[2886]: I0114 13:28:08.917875 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fc822dd2-4a0b-4df8-969d-8ce5598b7069-calico-apiserver-certs\") pod \"calico-apiserver-68b6f8f57b-kb2gl\" (UID: \"fc822dd2-4a0b-4df8-969d-8ce5598b7069\") " pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl"
Jan 14 13:28:08.919876 kubelet[2886]: I0114 13:28:08.918343 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-ca-bundle\") pod \"whisker-778857594d-hq6jm\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " pod="calico-system/whisker-778857594d-hq6jm"
Jan 14 13:28:08.919876 kubelet[2886]: I0114 13:28:08.918378 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq6nl\" (UniqueName: \"kubernetes.io/projected/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-kube-api-access-kq6nl\") pod \"whisker-778857594d-hq6jm\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " pod="calico-system/whisker-778857594d-hq6jm"
Jan 14 13:28:08.919876 kubelet[2886]: I0114 13:28:08.918402 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbd5l\" (UniqueName: \"kubernetes.io/projected/fc822dd2-4a0b-4df8-969d-8ce5598b7069-kube-api-access-mbd5l\") pod \"calico-apiserver-68b6f8f57b-kb2gl\" (UID: \"fc822dd2-4a0b-4df8-969d-8ce5598b7069\") " pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl"
Jan 14 13:28:08.919876 kubelet[2886]: I0114 13:28:08.918437 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjwh\" (UniqueName: \"kubernetes.io/projected/43c81015-17c1-4886-ba54-03a8237f3050-kube-api-access-btjwh\") pod \"calico-apiserver-68b6f8f57b-4vsgx\" (UID: \"43c81015-17c1-4886-ba54-03a8237f3050\") " pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx"
Jan 14 13:28:08.919876 kubelet[2886]: I0114 13:28:08.918473 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-backend-key-pair\") pod \"whisker-778857594d-hq6jm\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " pod="calico-system/whisker-778857594d-hq6jm"
Jan 14 13:28:08.930893 systemd[1]: Created slice kubepods-burstable-pod3335a4b7_b5c6_401a_8883_2638b6db1a9d.slice - libcontainer container kubepods-burstable-pod3335a4b7_b5c6_401a_8883_2638b6db1a9d.slice.
Jan 14 13:28:08.952725 systemd[1]: Created slice kubepods-besteffort-pod1356d1d1_69e1_470e_955d_5a3a9ab090a6.slice - libcontainer container kubepods-besteffort-pod1356d1d1_69e1_470e_955d_5a3a9ab090a6.slice.
Jan 14 13:28:08.970588 systemd[1]: Created slice kubepods-besteffort-pod8200b33d_eb45_4c93_98d1_0c3029a31280.slice - libcontainer container kubepods-besteffort-pod8200b33d_eb45_4c93_98d1_0c3029a31280.slice.
Jan 14 13:28:08.987321 containerd[1649]: time="2026-01-14T13:28:08.985978649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,}"
Jan 14 13:28:08.987038 systemd[1]: Created slice kubepods-besteffort-pod97139d64_ebd5_495e_81ad_3f4aa4c54bfd.slice - libcontainer container kubepods-besteffort-pod97139d64_ebd5_495e_81ad_3f4aa4c54bfd.slice.
Jan 14 13:28:09.020648 kubelet[2886]: I0114 13:28:09.020521 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97139d64-ebd5-495e-81ad-3f4aa4c54bfd-config\") pod \"goldmane-666569f655-h2gf2\" (UID: \"97139d64-ebd5-495e-81ad-3f4aa4c54bfd\") " pod="calico-system/goldmane-666569f655-h2gf2"
Jan 14 13:28:09.020770 kubelet[2886]: I0114 13:28:09.020720 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97139d64-ebd5-495e-81ad-3f4aa4c54bfd-goldmane-ca-bundle\") pod \"goldmane-666569f655-h2gf2\" (UID: \"97139d64-ebd5-495e-81ad-3f4aa4c54bfd\") " pod="calico-system/goldmane-666569f655-h2gf2"
Jan 14 13:28:09.020770 kubelet[2886]: I0114 13:28:09.020747 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/97139d64-ebd5-495e-81ad-3f4aa4c54bfd-goldmane-key-pair\") pod \"goldmane-666569f655-h2gf2\" (UID: \"97139d64-ebd5-495e-81ad-3f4aa4c54bfd\") " pod="calico-system/goldmane-666569f655-h2gf2"
Jan 14 13:28:09.020770 kubelet[2886]: I0114 13:28:09.020768 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mgp8\" (UniqueName: \"kubernetes.io/projected/3335a4b7-b5c6-401a-8883-2638b6db1a9d-kube-api-access-7mgp8\") pod \"coredns-674b8bbfcf-gvhhh\" (UID: \"3335a4b7-b5c6-401a-8883-2638b6db1a9d\") " pod="kube-system/coredns-674b8bbfcf-gvhhh"
Jan 14 13:28:09.020995 kubelet[2886]: I0114 13:28:09.020903 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvdc\" (UniqueName: \"kubernetes.io/projected/1356d1d1-69e1-470e-955d-5a3a9ab090a6-kube-api-access-vjvdc\") pod \"calico-kube-controllers-c96748b8f-wwf76\" (UID: \"1356d1d1-69e1-470e-955d-5a3a9ab090a6\") " pod="calico-system/calico-kube-controllers-c96748b8f-wwf76"
Jan 14 13:28:09.020995 kubelet[2886]: I0114 13:28:09.020945 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzxv\" (UniqueName: \"kubernetes.io/projected/97139d64-ebd5-495e-81ad-3f4aa4c54bfd-kube-api-access-jzzxv\") pod \"goldmane-666569f655-h2gf2\" (UID: \"97139d64-ebd5-495e-81ad-3f4aa4c54bfd\") " pod="calico-system/goldmane-666569f655-h2gf2"
Jan 14 13:28:09.020995 kubelet[2886]: I0114 13:28:09.020973 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3335a4b7-b5c6-401a-8883-2638b6db1a9d-config-volume\") pod \"coredns-674b8bbfcf-gvhhh\" (UID: \"3335a4b7-b5c6-401a-8883-2638b6db1a9d\") " pod="kube-system/coredns-674b8bbfcf-gvhhh"
Jan 14 13:28:09.020995 kubelet[2886]: I0114 13:28:09.020993 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1356d1d1-69e1-470e-955d-5a3a9ab090a6-tigera-ca-bundle\") pod \"calico-kube-controllers-c96748b8f-wwf76\" (UID: \"1356d1d1-69e1-470e-955d-5a3a9ab090a6\") " pod="calico-system/calico-kube-controllers-c96748b8f-wwf76"
Jan 14 13:28:09.164616 kubelet[2886]: E0114 13:28:09.164300 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:28:09.166026 kubelet[2886]: E0114 13:28:09.165556 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:28:09.166937 containerd[1649]: time="2026-01-14T13:28:09.166901600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,}"
Jan 14 13:28:09.190639 containerd[1649]: time="2026-01-14T13:28:09.189949215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778857594d-hq6jm,Uid:c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9,Namespace:calico-system,Attempt:0,}"
Jan 14 13:28:09.191536 containerd[1649]: time="2026-01-14T13:28:09.191210956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 14 13:28:09.221223 containerd[1649]: time="2026-01-14T13:28:09.220672752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 13:28:09.223624 containerd[1649]: time="2026-01-14T13:28:09.223589200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-4vsgx,Uid:43c81015-17c1-4886-ba54-03a8237f3050,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 13:28:09.243351 kubelet[2886]: E0114 13:28:09.242886 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 13:28:09.243749 containerd[1649]: time="2026-01-14T13:28:09.243723461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvhhh,Uid:3335a4b7-b5c6-401a-8883-2638b6db1a9d,Namespace:kube-system,Attempt:0,}"
Jan 14 13:28:09.272017 containerd[1649]: time="2026-01-14T13:28:09.271486050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96748b8f-wwf76,Uid:1356d1d1-69e1-470e-955d-5a3a9ab090a6,Namespace:calico-system,Attempt:0,}"
Jan 14 13:28:09.302570 containerd[1649]: time="2026-01-14T13:28:09.302375044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h2gf2,Uid:97139d64-ebd5-495e-81ad-3f4aa4c54bfd,Namespace:calico-system,Attempt:0,}"
Jan 14 13:28:09.644714 containerd[1649]: time="2026-01-14T13:28:09.644669878Z" level=error msg="Failed to destroy network for sandbox \"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.675337 containerd[1649]: time="2026-01-14T13:28:09.674626867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.677398 kubelet[2886]: E0114 13:28:09.677035 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.677490 kubelet[2886]: E0114 13:28:09.677448 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d7kbj"
Jan 14 13:28:09.677490 kubelet[2886]: E0114 13:28:09.677477 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d7kbj"
Jan 14 13:28:09.678050 kubelet[2886]: E0114 13:28:09.677961 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d7kbj_kube-system(d4e5f128-84cf-45f6-bd4e-05162a204a27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d7kbj_kube-system(d4e5f128-84cf-45f6-bd4e-05162a204a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc2be3e08282701ad4e998a70d63858f878e2ef124dec896c73622472ebab269\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d7kbj" podUID="d4e5f128-84cf-45f6-bd4e-05162a204a27"
Jan 14 13:28:09.736393 containerd[1649]: time="2026-01-14T13:28:09.736327859Z" level=error msg="Failed to destroy network for sandbox \"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.750291 systemd[1]: run-netns-cni\x2d893a6bfc\x2d16c2\x2d176e\x2d0a13\x2d7709f7536261.mount: Deactivated successfully.
Jan 14 13:28:09.799066 containerd[1649]: time="2026-01-14T13:28:09.798500156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.805265 kubelet[2886]: E0114 13:28:09.803003 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.805265 kubelet[2886]: E0114 13:28:09.803245 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckktd"
Jan 14 13:28:09.805265 kubelet[2886]: E0114 13:28:09.803281 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckktd"
Jan 14 13:28:09.806319 kubelet[2886]: E0114 13:28:09.803345 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad1cecc8e37f9e7d9d0f45c5f30139bd11ca8c500d541bf915cca6ecb630b805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280"
Jan 14 13:28:09.819422 containerd[1649]: time="2026-01-14T13:28:09.816066951Z" level=error msg="Failed to destroy network for sandbox \"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.822383 systemd[1]: run-netns-cni\x2d995943ae\x2db5fd\x2db805\x2d1c15\x2dc4acd7cbfcb1.mount: Deactivated successfully.
Jan 14 13:28:09.834701 containerd[1649]: time="2026-01-14T13:28:09.834506166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778857594d-hq6jm,Uid:c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.835574 kubelet[2886]: E0114 13:28:09.834925 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.835574 kubelet[2886]: E0114 13:28:09.834991 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778857594d-hq6jm"
Jan 14 13:28:09.835574 kubelet[2886]: E0114 13:28:09.835018 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778857594d-hq6jm"
Jan 14 13:28:09.844250 kubelet[2886]: E0114 13:28:09.835386 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778857594d-hq6jm_calico-system(c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778857594d-hq6jm_calico-system(c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfd9e4420b317fd014c2bcea5094edd098e854a8cd44056cdffb7abfbc4ae924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778857594d-hq6jm" podUID="c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9"
Jan 14 13:28:09.841887 systemd[1]: run-netns-cni\x2dceb97b3c\x2df66d\x2d0566\x2d59fb\x2d424e73f345bc.mount: Deactivated successfully.
Jan 14 13:28:09.844564 containerd[1649]: time="2026-01-14T13:28:09.837224260Z" level=error msg="Failed to destroy network for sandbox \"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.859035 containerd[1649]: time="2026-01-14T13:28:09.858561360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvhhh,Uid:3335a4b7-b5c6-401a-8883-2638b6db1a9d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.860013 kubelet[2886]: E0114 13:28:09.859552 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.860013 kubelet[2886]: E0114 13:28:09.859607 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvhhh"
Jan 14 13:28:09.860013 kubelet[2886]: E0114 13:28:09.859637 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvhhh"
Jan 14 13:28:09.860945 kubelet[2886]: E0114 13:28:09.859695 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gvhhh_kube-system(3335a4b7-b5c6-401a-8883-2638b6db1a9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gvhhh_kube-system(3335a4b7-b5c6-401a-8883-2638b6db1a9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b0c7f040b874f78c61df7e48ddde534e4668f96c61d9bec77ac02a235ee9b14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gvhhh" podUID="3335a4b7-b5c6-401a-8883-2638b6db1a9d"
Jan 14 13:28:09.915486 containerd[1649]: time="2026-01-14T13:28:09.913566733Z" level=error msg="Failed to destroy network for sandbox \"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.919439 systemd[1]: run-netns-cni\x2d35bbe878\x2dee33\x2de0d8\x2d142f\x2d2df07c091506.mount: Deactivated successfully.
Jan 14 13:28:09.930071 containerd[1649]: time="2026-01-14T13:28:09.930039265Z" level=error msg="Failed to destroy network for sandbox \"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.940878 containerd[1649]: time="2026-01-14T13:28:09.938746213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96748b8f-wwf76,Uid:1356d1d1-69e1-470e-955d-5a3a9ab090a6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.940878 containerd[1649]: time="2026-01-14T13:28:09.940452915Z" level=error msg="Failed to destroy network for sandbox \"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.941554 kubelet[2886]: E0114 13:28:09.939406 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.941554 kubelet[2886]: E0114 13:28:09.939464 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76"
Jan 14 13:28:09.941554 kubelet[2886]: E0114 13:28:09.939495 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76"
Jan 14 13:28:09.941884 kubelet[2886]: E0114 13:28:09.939556 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d0078b70a566006711e40c226b962dffcb299ffeafa0d210c2ef9cc4cf07ab4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6"
Jan 14 13:28:09.953674 containerd[1649]: time="2026-01-14T13:28:09.953285507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.955421 kubelet[2886]: E0114 13:28:09.955029 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.955937 kubelet[2886]: E0114 13:28:09.955568 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl"
Jan 14 13:28:09.956511 kubelet[2886]: E0114 13:28:09.956032 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl"
Jan 14 13:28:09.957954 kubelet[2886]: E0114 13:28:09.957510 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ea7f84bba0f9713a9dc152225edf47cc711e0387ec9a0b3d056b03dc8b5e39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069"
Jan 14 13:28:09.960736 containerd[1649]: time="2026-01-14T13:28:09.959876593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-4vsgx,Uid:43c81015-17c1-4886-ba54-03a8237f3050,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.961242 kubelet[2886]: E0114 13:28:09.961055 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 13:28:09.961366 kubelet[2886]: E0114 13:28:09.961344 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx"
Jan 14 13:28:09.961459 kubelet[2886]: E0114 13:28:09.961435 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx"
Jan 14 13:28:09.961692 kubelet[2886]: E0114 13:28:09.961660 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf3c52d0422e0feb1cff9b248e27ff56e23f77bb88e269e203bf17a8b2d1a4d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:28:09.970002 containerd[1649]: time="2026-01-14T13:28:09.969916238Z" level=error msg="Failed to destroy network for sandbox \"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:09.982417 containerd[1649]: time="2026-01-14T13:28:09.982352740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h2gf2,Uid:97139d64-ebd5-495e-81ad-3f4aa4c54bfd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:09.984325 kubelet[2886]: E0114 13:28:09.983329 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:09.984325 kubelet[2886]: E0114 13:28:09.983400 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h2gf2" Jan 14 13:28:09.984325 kubelet[2886]: E0114 13:28:09.983424 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h2gf2" Jan 14 13:28:09.984650 kubelet[2886]: E0114 13:28:09.984579 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"835c510b54340cd1a8fc2b8e0d76e0dcba0fb00d0013ba471a888599f93e4cd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:10.722302 systemd[1]: run-netns-cni\x2dc0cc9a52\x2d1f96\x2db76a\x2d6abe\x2d5cebeeb3d54b.mount: Deactivated successfully. Jan 14 13:28:10.722541 systemd[1]: run-netns-cni\x2d48eb9b3b\x2db195\x2de0fd\x2d0d92\x2dbd4a5a4ffd9e.mount: Deactivated successfully. Jan 14 13:28:10.722645 systemd[1]: run-netns-cni\x2dab135882\x2d1a44\x2d4788\x2d780a\x2d1f474bef6005.mount: Deactivated successfully. 
Jan 14 13:28:19.990651 kubelet[2886]: I0114 13:28:19.988893 2886 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 13:28:19.993801 kubelet[2886]: E0114 13:28:19.993588 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:20.137500 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 13:28:20.137646 kernel: audit: type=1325 audit(1768397300.127:569): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:20.127000 audit[3912]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:20.127000 audit[3912]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2f6f2e10 a2=0 a3=7ffc2f6f2dfc items=0 ppid=3003 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:20.172802 kernel: audit: type=1300 audit(1768397300.127:569): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2f6f2e10 a2=0 a3=7ffc2f6f2dfc items=0 ppid=3003 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:20.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:20.185654 kernel: audit: type=1327 audit(1768397300.127:569): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:20.186000 audit[3912]: NETFILTER_CFG table=nat:120 family=2 
entries=19 op=nft_register_chain pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:20.186000 audit[3912]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2f6f2e10 a2=0 a3=7ffc2f6f2dfc items=0 ppid=3003 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:20.244035 kernel: audit: type=1325 audit(1768397300.186:570): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:20.244227 kernel: audit: type=1300 audit(1768397300.186:570): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2f6f2e10 a2=0 a3=7ffc2f6f2dfc items=0 ppid=3003 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:20.186000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:20.264252 kernel: audit: type=1327 audit(1768397300.186:570): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:20.270429 kubelet[2886]: E0114 13:28:20.270062 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:20.874291 containerd[1649]: time="2026-01-14T13:28:20.874016895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:20.874906 containerd[1649]: time="2026-01-14T13:28:20.874030791Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-778857594d-hq6jm,Uid:c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:21.221358 containerd[1649]: time="2026-01-14T13:28:21.221266197Z" level=error msg="Failed to destroy network for sandbox \"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.226913 systemd[1]: run-netns-cni\x2de54a042e\x2d05e1\x2de093\x2dbb04\x2d05f11eaf8d87.mount: Deactivated successfully. Jan 14 13:28:21.229338 containerd[1649]: time="2026-01-14T13:28:21.228345016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.230581 kubelet[2886]: E0114 13:28:21.230055 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.230581 kubelet[2886]: E0114 13:28:21.230295 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckktd" Jan 14 13:28:21.230581 kubelet[2886]: E0114 13:28:21.230323 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckktd" Jan 14 13:28:21.231412 kubelet[2886]: E0114 13:28:21.230380 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a0739acb4b2c2c3b2d401b390261fd31a0939d77c2f37ca9aaa5c26057dc24f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:21.251501 containerd[1649]: time="2026-01-14T13:28:21.251351065Z" level=error msg="Failed to destroy network for sandbox \"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.254888 systemd[1]: run-netns-cni\x2d4b776cc4\x2d572d\x2d7d95\x2d9f63\x2d589e0d707a0f.mount: Deactivated successfully. 
Jan 14 13:28:21.261421 containerd[1649]: time="2026-01-14T13:28:21.261315612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-778857594d-hq6jm,Uid:c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.264547 kubelet[2886]: E0114 13:28:21.263843 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:21.264547 kubelet[2886]: E0114 13:28:21.263900 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-778857594d-hq6jm" Jan 14 13:28:21.264547 kubelet[2886]: E0114 13:28:21.263923 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-778857594d-hq6jm" Jan 14 13:28:21.264658 kubelet[2886]: E0114 13:28:21.264549 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-778857594d-hq6jm_calico-system(c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-778857594d-hq6jm_calico-system(c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49b462ea785dc719f3cf37ea3ae8fef24ac04689212e06230895eb06e436bbd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-778857594d-hq6jm" podUID="c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9" Jan 14 13:28:21.274045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2741979096.mount: Deactivated successfully. Jan 14 13:28:21.318345 containerd[1649]: time="2026-01-14T13:28:21.318069952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:21.320961 containerd[1649]: time="2026-01-14T13:28:21.320836564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 13:28:21.323552 containerd[1649]: time="2026-01-14T13:28:21.323363180Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:21.327798 containerd[1649]: time="2026-01-14T13:28:21.327561605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:28:21.328583 containerd[1649]: time="2026-01-14T13:28:21.328379884Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.13705658s" Jan 14 13:28:21.328583 containerd[1649]: time="2026-01-14T13:28:21.328475311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 13:28:21.359514 containerd[1649]: time="2026-01-14T13:28:21.359040139Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:28:21.424502 containerd[1649]: time="2026-01-14T13:28:21.424321252Z" level=info msg="Container 9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:21.458835 containerd[1649]: time="2026-01-14T13:28:21.458559810Z" level=info msg="CreateContainer within sandbox \"1be9bf10005bd36a6d7aa32931e00ab0c810336bcc9e26bdf598f99a2fe8a547\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6\"" Jan 14 13:28:21.460644 containerd[1649]: time="2026-01-14T13:28:21.460518132Z" level=info msg="StartContainer for \"9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6\"" Jan 14 13:28:21.463883 containerd[1649]: time="2026-01-14T13:28:21.463623521Z" level=info msg="connecting to shim 9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6" address="unix:///run/containerd/s/f334733878ea59d9a0602a8ed04f104c05be73862c9b4807aea801bb38a692f3" protocol=ttrpc version=3 Jan 14 13:28:21.676862 systemd[1]: Started 
cri-containerd-9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6.scope - libcontainer container 9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6. Jan 14 13:28:21.827000 audit: BPF prog-id=175 op=LOAD Jan 14 13:28:21.836221 kernel: audit: type=1334 audit(1768397301.827:571): prog-id=175 op=LOAD Jan 14 13:28:21.827000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.879418 kubelet[2886]: E0114 13:28:21.874614 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:21.880265 containerd[1649]: time="2026-01-14T13:28:21.880020206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:28:21.880999 containerd[1649]: time="2026-01-14T13:28:21.880051687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,}" Jan 14 13:28:21.898851 kernel: audit: type=1300 audit(1768397301.827:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.898961 kernel: audit: type=1327 audit(1768397301.827:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.831000 audit: BPF prog-id=176 op=LOAD Jan 14 13:28:21.908583 kernel: audit: type=1334 audit(1768397301.831:572): prog-id=176 op=LOAD Jan 14 13:28:21.831000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.831000 audit: BPF prog-id=176 op=UNLOAD Jan 14 13:28:21.831000 audit[3979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.831000 audit: BPF prog-id=175 op=UNLOAD Jan 14 13:28:21.831000 audit[3979]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 
a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.831000 audit: BPF prog-id=177 op=LOAD Jan 14 13:28:21.831000 audit[3979]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3394 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:21.831000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963333134653862333236643136363739643633316461363232363861 Jan 14 13:28:21.998061 containerd[1649]: time="2026-01-14T13:28:21.997878736Z" level=info msg="StartContainer for \"9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6\" returns successfully" Jan 14 13:28:22.160794 containerd[1649]: time="2026-01-14T13:28:22.160345624Z" level=error msg="Failed to destroy network for sandbox \"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.166050 systemd[1]: run-netns-cni\x2dfcac88e4\x2d2aa4\x2d46c0\x2d3457\x2db2f1b064e0c9.mount: Deactivated successfully. 
Jan 14 13:28:22.167942 containerd[1649]: time="2026-01-14T13:28:22.166816949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.173326 kubelet[2886]: E0114 13:28:22.172988 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.173326 kubelet[2886]: E0114 13:28:22.173042 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" Jan 14 13:28:22.173326 kubelet[2886]: E0114 13:28:22.173064 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" Jan 14 13:28:22.176236 kubelet[2886]: E0114 13:28:22.173265 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76d3c58f0d8c70fcf0378bf006f1caa6ae99a6eb0cc359688336cde883341520\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:28:22.255806 containerd[1649]: time="2026-01-14T13:28:22.255507903Z" level=error msg="Failed to destroy network for sandbox \"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.264552 containerd[1649]: time="2026-01-14T13:28:22.264501896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.268535 kubelet[2886]: E0114 13:28:22.264994 2886 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:28:22.268535 kubelet[2886]: E0114 13:28:22.265064 2886 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d7kbj" Jan 14 13:28:22.268535 kubelet[2886]: E0114 13:28:22.265529 2886 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d7kbj" Jan 14 13:28:22.265517 systemd[1]: run-netns-cni\x2d582cb7c4\x2d17a6\x2dece1\x2d5a27\x2d937038112cfc.mount: Deactivated successfully. 
Jan 14 13:28:22.270464 kubelet[2886]: E0114 13:28:22.265575 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d7kbj_kube-system(d4e5f128-84cf-45f6-bd4e-05162a204a27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d7kbj_kube-system(d4e5f128-84cf-45f6-bd4e-05162a204a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"751ea3cb5ec723b600e954c130b7027b93a6c5c34a14a5ee4092321e1124e396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d7kbj" podUID="d4e5f128-84cf-45f6-bd4e-05162a204a27" Jan 14 13:28:22.300945 kubelet[2886]: E0114 13:28:22.300550 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:22.340968 kubelet[2886]: I0114 13:28:22.340838 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-phls5" podStartSLOduration=2.8100804029999997 podStartE2EDuration="26.340823357s" podCreationTimestamp="2026-01-14 13:27:56 +0000 UTC" firstStartedPulling="2026-01-14 13:27:57.798719819 +0000 UTC m=+24.236341216" lastFinishedPulling="2026-01-14 13:28:21.329462763 +0000 UTC m=+47.767084170" observedRunningTime="2026-01-14 13:28:22.338053036 +0000 UTC m=+48.775674444" watchObservedRunningTime="2026-01-14 13:28:22.340823357 +0000 UTC m=+48.778444763" Jan 14 13:28:22.469826 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:28:22.469911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 14 13:28:22.809538 kubelet[2886]: I0114 13:28:22.808931 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-backend-key-pair\") pod \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " Jan 14 13:28:22.809538 kubelet[2886]: I0114 13:28:22.809243 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-ca-bundle\") pod \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " Jan 14 13:28:22.809538 kubelet[2886]: I0114 13:28:22.809273 2886 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq6nl\" (UniqueName: \"kubernetes.io/projected/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-kube-api-access-kq6nl\") pod \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\" (UID: \"c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9\") " Jan 14 13:28:22.812954 kubelet[2886]: I0114 13:28:22.812632 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9" (UID: "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 13:28:22.830326 systemd[1]: var-lib-kubelet-pods-c6e0811d\x2dd1ed\x2d43ea\x2d9ccd\x2d2eda32c8d3f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkq6nl.mount: Deactivated successfully. 
Jan 14 13:28:22.833328 kubelet[2886]: I0114 13:28:22.832821 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9" (UID: "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 13:28:22.837851 systemd[1]: var-lib-kubelet-pods-c6e0811d\x2dd1ed\x2d43ea\x2d9ccd\x2d2eda32c8d3f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 13:28:22.841518 kubelet[2886]: I0114 13:28:22.838609 2886 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-kube-api-access-kq6nl" (OuterVolumeSpecName: "kube-api-access-kq6nl") pod "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9" (UID: "c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9"). InnerVolumeSpecName "kube-api-access-kq6nl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 13:28:22.910549 kubelet[2886]: I0114 13:28:22.910518 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 13:28:22.910690 kubelet[2886]: I0114 13:28:22.910679 2886 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 13:28:22.911317 kubelet[2886]: I0114 13:28:22.911300 2886 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kq6nl\" (UniqueName: \"kubernetes.io/projected/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9-kube-api-access-kq6nl\") on node \"localhost\" DevicePath \"\"" Jan 14 13:28:23.304865 kubelet[2886]: E0114 13:28:23.304509 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:23.324570 systemd[1]: Removed slice kubepods-besteffort-podc6e0811d_d1ed_43ea_9ccd_2eda32c8d3f9.slice - libcontainer container kubepods-besteffort-podc6e0811d_d1ed_43ea_9ccd_2eda32c8d3f9.slice. Jan 14 13:28:23.471922 systemd[1]: Created slice kubepods-besteffort-pod2d3c1365_6a1f_45b8_8652_2b261d46979e.slice - libcontainer container kubepods-besteffort-pod2d3c1365_6a1f_45b8_8652_2b261d46979e.slice. 
Jan 14 13:28:23.620887 kubelet[2886]: I0114 13:28:23.620561 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d3c1365-6a1f-45b8-8652-2b261d46979e-whisker-ca-bundle\") pod \"whisker-76d688f66-n8bg2\" (UID: \"2d3c1365-6a1f-45b8-8652-2b261d46979e\") " pod="calico-system/whisker-76d688f66-n8bg2" Jan 14 13:28:23.620887 kubelet[2886]: I0114 13:28:23.620833 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8j4n\" (UniqueName: \"kubernetes.io/projected/2d3c1365-6a1f-45b8-8652-2b261d46979e-kube-api-access-j8j4n\") pod \"whisker-76d688f66-n8bg2\" (UID: \"2d3c1365-6a1f-45b8-8652-2b261d46979e\") " pod="calico-system/whisker-76d688f66-n8bg2" Jan 14 13:28:23.620887 kubelet[2886]: I0114 13:28:23.620869 2886 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d3c1365-6a1f-45b8-8652-2b261d46979e-whisker-backend-key-pair\") pod \"whisker-76d688f66-n8bg2\" (UID: \"2d3c1365-6a1f-45b8-8652-2b261d46979e\") " pod="calico-system/whisker-76d688f66-n8bg2" Jan 14 13:28:23.779900 containerd[1649]: time="2026-01-14T13:28:23.779323797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d688f66-n8bg2,Uid:2d3c1365-6a1f-45b8-8652-2b261d46979e,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:23.884796 kubelet[2886]: I0114 13:28:23.884054 2886 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9" path="/var/lib/kubelet/pods/c6e0811d-d1ed-43ea-9ccd-2eda32c8d3f9/volumes" Jan 14 13:28:24.332602 kubelet[2886]: E0114 13:28:24.332418 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:24.407848 
systemd-networkd[1422]: cali02a67e68c8b: Link UP Jan 14 13:28:24.415559 systemd-networkd[1422]: cali02a67e68c8b: Gained carrier Jan 14 13:28:24.503905 containerd[1649]: 2026-01-14 13:28:23.854 [INFO][4162] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:28:24.503905 containerd[1649]: 2026-01-14 13:28:23.920 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76d688f66--n8bg2-eth0 whisker-76d688f66- calico-system 2d3c1365-6a1f-45b8-8652-2b261d46979e 944 0 2026-01-14 13:28:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76d688f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76d688f66-n8bg2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali02a67e68c8b [] [] }} ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-" Jan 14 13:28:24.503905 containerd[1649]: 2026-01-14 13:28:23.920 [INFO][4162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.503905 containerd[1649]: 2026-01-14 13:28:24.154 [INFO][4176] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" HandleID="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Workload="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.157 [INFO][4176] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" HandleID="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Workload="localhost-k8s-whisker--76d688f66--n8bg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e320), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76d688f66-n8bg2", "timestamp":"2026-01-14 13:28:24.154963682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.157 [INFO][4176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.158 [INFO][4176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.159 [INFO][4176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.185 [INFO][4176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" host="localhost" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.205 [INFO][4176] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.220 [INFO][4176] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.224 [INFO][4176] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.236 [INFO][4176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 14 13:28:24.504470 containerd[1649]: 2026-01-14 13:28:24.236 [INFO][4176] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" host="localhost" Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.245 [INFO][4176] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29 Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.258 [INFO][4176] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" host="localhost" Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.278 [INFO][4176] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" host="localhost" Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.278 [INFO][4176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" host="localhost" Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.278 [INFO][4176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:28:24.506437 containerd[1649]: 2026-01-14 13:28:24.278 [INFO][4176] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" HandleID="k8s-pod-network.3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Workload="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.506562 containerd[1649]: 2026-01-14 13:28:24.289 [INFO][4162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d688f66--n8bg2-eth0", GenerateName:"whisker-76d688f66-", Namespace:"calico-system", SelfLink:"", UID:"2d3c1365-6a1f-45b8-8652-2b261d46979e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 28, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d688f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76d688f66-n8bg2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali02a67e68c8b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:24.506562 containerd[1649]: 2026-01-14 13:28:24.289 [INFO][4162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.506852 containerd[1649]: 2026-01-14 13:28:24.289 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02a67e68c8b ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.506852 containerd[1649]: 2026-01-14 13:28:24.421 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.506988 containerd[1649]: 2026-01-14 13:28:24.434 [INFO][4162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76d688f66--n8bg2-eth0", GenerateName:"whisker-76d688f66-", Namespace:"calico-system", SelfLink:"", UID:"2d3c1365-6a1f-45b8-8652-2b261d46979e", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 28, 23, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76d688f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29", Pod:"whisker-76d688f66-n8bg2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali02a67e68c8b", MAC:"a2:15:e5:d4:ee:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:24.509329 containerd[1649]: 2026-01-14 13:28:24.471 [INFO][4162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" Namespace="calico-system" Pod="whisker-76d688f66-n8bg2" WorkloadEndpoint="localhost-k8s-whisker--76d688f66--n8bg2-eth0" Jan 14 13:28:24.879274 kubelet[2886]: E0114 13:28:24.878992 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:24.882038 containerd[1649]: time="2026-01-14T13:28:24.881601494Z" level=info msg="connecting to shim 3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29" address="unix:///run/containerd/s/b68f3aee571107b65de3e74be9546c70c2722f4a09aefcc16472eab6737d871d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:24.890346 containerd[1649]: time="2026-01-14T13:28:24.889568818Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvhhh,Uid:3335a4b7-b5c6-401a-8883-2638b6db1a9d,Namespace:kube-system,Attempt:0,}" Jan 14 13:28:24.893975 containerd[1649]: time="2026-01-14T13:28:24.893930612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h2gf2,Uid:97139d64-ebd5-495e-81ad-3f4aa4c54bfd,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:24.900804 containerd[1649]: time="2026-01-14T13:28:24.899054868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-4vsgx,Uid:43c81015-17c1-4886-ba54-03a8237f3050,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:28:25.043505 systemd[1]: Started cri-containerd-3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29.scope - libcontainer container 3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29. Jan 14 13:28:25.378047 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 13:28:25.378382 kernel: audit: type=1334 audit(1768397305.344:576): prog-id=178 op=LOAD Jan 14 13:28:25.344000 audit: BPF prog-id=178 op=LOAD Jan 14 13:28:25.349000 audit: BPF prog-id=179 op=LOAD Jan 14 13:28:25.396060 kernel: audit: type=1334 audit(1768397305.349:577): prog-id=179 op=LOAD Jan 14 13:28:25.349000 audit[4334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.498947 kernel: audit: type=1300 audit(1768397305.349:577): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.516222 systemd-resolved[1286]: Failed to determine the 
local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:25.516385 systemd-networkd[1422]: cali02a67e68c8b: Gained IPv6LL Jan 14 13:28:25.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.558663 kernel: audit: type=1327 audit(1768397305.349:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.349000 audit: BPF prog-id=179 op=UNLOAD Jan 14 13:28:25.349000 audit[4334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.615801 kernel: audit: type=1334 audit(1768397305.349:578): prog-id=179 op=UNLOAD Jan 14 13:28:25.615911 kernel: audit: type=1300 audit(1768397305.349:578): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.615950 kernel: audit: type=1327 audit(1768397305.349:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.349000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.656986 kernel: audit: type=1334 audit(1768397305.354:579): prog-id=180 op=LOAD Jan 14 13:28:25.354000 audit: BPF prog-id=180 op=LOAD Jan 14 13:28:25.354000 audit[4334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.790230 kernel: audit: type=1300 audit(1768397305.354:579): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.790326 kernel: audit: type=1327 audit(1768397305.354:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.354000 audit: BPF prog-id=181 op=LOAD Jan 14 13:28:25.354000 audit[4334]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.354000 audit: BPF prog-id=181 op=UNLOAD Jan 14 13:28:25.354000 audit[4334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.354000 audit: BPF prog-id=180 op=UNLOAD Jan 14 13:28:25.354000 audit[4334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.354000 audit: BPF prog-id=182 op=LOAD Jan 14 13:28:25.354000 audit[4334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4321 pid=4334 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:25.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366343136303238663839303763623165366464633065386531393835 Jan 14 13:28:25.913291 containerd[1649]: time="2026-01-14T13:28:25.909574505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96748b8f-wwf76,Uid:1356d1d1-69e1-470e-955d-5a3a9ab090a6,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:26.192990 containerd[1649]: time="2026-01-14T13:28:26.191957806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76d688f66-n8bg2,Uid:2d3c1365-6a1f-45b8-8652-2b261d46979e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f416028f8907cb1e6ddc0e8e19857ea82e8d72e9ff6e08fb03ed989cd80db29\"" Jan 14 13:28:26.205069 containerd[1649]: time="2026-01-14T13:28:26.204253318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:28:26.378958 containerd[1649]: time="2026-01-14T13:28:26.378912660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:26.393062 containerd[1649]: time="2026-01-14T13:28:26.393022430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:28:26.396484 kubelet[2886]: E0114 13:28:26.396449 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:28:26.402323 kubelet[2886]: E0114 13:28:26.397601 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:28:26.402405 containerd[1649]: time="2026-01-14T13:28:26.399930269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:26.408634 kubelet[2886]: E0114 13:28:26.399309 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3119e5cb9c374c7884796c10460fa4dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:26.419989 containerd[1649]: time="2026-01-14T13:28:26.419813775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:28:26.828008 containerd[1649]: time="2026-01-14T13:28:26.824515450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:26.836572 containerd[1649]: time="2026-01-14T13:28:26.836537022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:26.852569 containerd[1649]: time="2026-01-14T13:28:26.852452593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:28:26.860341 kubelet[2886]: E0114 13:28:26.854552 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:28:26.860341 kubelet[2886]: E0114 13:28:26.854601 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:28:26.860449 kubelet[2886]: E0114 13:28:26.857291 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]C
ontainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:26.861390 kubelet[2886]: E0114 13:28:26.860944 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:28:26.974663 systemd-networkd[1422]: cali8eaf4ea54dc: Link UP Jan 14 13:28:26.980671 systemd-networkd[1422]: cali8eaf4ea54dc: Gained carrier Jan 14 13:28:27.100000 audit: BPF prog-id=183 op=LOAD Jan 14 13:28:27.103341 containerd[1649]: 2026-01-14 13:28:25.174 [INFO][4350] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:28:27.103341 containerd[1649]: 2026-01-14 13:28:25.236 [INFO][4350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--h2gf2-eth0 goldmane-666569f655- calico-system 97139d64-ebd5-495e-81ad-3f4aa4c54bfd 846 0 2026-01-14 13:27:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-h2gf2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8eaf4ea54dc [] [] }} ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-" Jan 14 13:28:27.103341 containerd[1649]: 2026-01-14 13:28:25.239 [INFO][4350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.103341 containerd[1649]: 2026-01-14 13:28:26.290 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" HandleID="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Workload="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.291 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" HandleID="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Workload="localhost-k8s-goldmane--666569f655--h2gf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000326f00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-h2gf2", "timestamp":"2026-01-14 13:28:26.290288507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.291 
[INFO][4400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.291 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.291 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.358 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" host="localhost" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.432 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.710 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.758 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.781 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.104339 containerd[1649]: 2026-01-14 13:28:26.782 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" host="localhost" Jan 14 13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.810 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964 Jan 14 13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.852 [INFO][4400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" host="localhost" Jan 14 
13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.922 [INFO][4400] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" host="localhost" Jan 14 13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.922 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" host="localhost" Jan 14 13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.936 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:28:27.111813 containerd[1649]: 2026-01-14 13:28:26.936 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" HandleID="k8s-pod-network.ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Workload="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.112008 containerd[1649]: 2026-01-14 13:28:26.961 [INFO][4350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h2gf2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"97139d64-ebd5-495e-81ad-3f4aa4c54bfd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-h2gf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8eaf4ea54dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.112008 containerd[1649]: 2026-01-14 13:28:26.961 [INFO][4350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.112446 containerd[1649]: 2026-01-14 13:28:26.961 [INFO][4350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eaf4ea54dc ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.112446 containerd[1649]: 2026-01-14 13:28:26.979 [INFO][4350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.112541 containerd[1649]: 2026-01-14 13:28:26.982 [INFO][4350] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--h2gf2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"97139d64-ebd5-495e-81ad-3f4aa4c54bfd", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964", Pod:"goldmane-666569f655-h2gf2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8eaf4ea54dc", MAC:"46:27:09:8b:95:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.114665 containerd[1649]: 2026-01-14 13:28:27.066 [INFO][4350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" Namespace="calico-system" Pod="goldmane-666569f655-h2gf2" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--h2gf2-eth0" Jan 14 13:28:27.100000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8c9c9eb0 a2=98 a3=1fffffffffffffff items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.100000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.122000 audit: BPF prog-id=183 op=UNLOAD Jan 14 13:28:27.122000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8c9c9e80 a3=0 items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.122000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.131000 audit: BPF prog-id=184 op=LOAD Jan 14 13:28:27.131000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8c9c9d90 a2=94 a3=3 items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.131000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.132000 audit: BPF prog-id=184 op=UNLOAD Jan 14 13:28:27.132000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8c9c9d90 a2=94 a3=3 items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.132000 audit: BPF prog-id=185 op=LOAD Jan 14 13:28:27.132000 audit[4466]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8c9c9dd0 a2=94 a3=7ffd8c9c9fb0 items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.132000 audit: BPF prog-id=185 op=UNLOAD Jan 14 13:28:27.132000 audit[4466]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd8c9c9dd0 a2=94 a3=7ffd8c9c9fb0 items=0 ppid=4207 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.132000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:28:27.248000 audit: BPF prog-id=186 op=LOAD Jan 14 13:28:27.248000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe88fa61f0 a2=98 a3=3 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.248000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.248000 audit: BPF prog-id=186 op=UNLOAD Jan 14 13:28:27.248000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe88fa61c0 a3=0 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.248000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.251000 audit: BPF prog-id=187 op=LOAD Jan 14 13:28:27.251000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe88fa5fe0 a2=94 a3=54428f items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.251000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.251000 audit: BPF prog-id=187 op=UNLOAD Jan 14 13:28:27.251000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe88fa5fe0 a2=94 a3=54428f items=0 ppid=4207 
pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.251000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.251000 audit: BPF prog-id=188 op=LOAD Jan 14 13:28:27.251000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe88fa6010 a2=94 a3=2 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.251000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.251000 audit: BPF prog-id=188 op=UNLOAD Jan 14 13:28:27.251000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe88fa6010 a2=0 a3=2 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.251000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:27.372502 containerd[1649]: time="2026-01-14T13:28:27.366471043Z" level=info msg="connecting to shim ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964" address="unix:///run/containerd/s/19ea4da2cb87b1155e7f8792edf471d6d2ef718286726ddac60af2d9cf8355ed" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:27.369594 systemd-networkd[1422]: calic949a940afc: Link UP Jan 14 13:28:27.377468 systemd-networkd[1422]: calic949a940afc: Gained carrier Jan 14 13:28:27.469485 kubelet[2886]: E0114 13:28:27.468965 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:28:27.511945 containerd[1649]: 2026-01-14 13:28:25.468 [INFO][4366] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:28:27.511945 containerd[1649]: 2026-01-14 13:28:25.608 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0 coredns-674b8bbfcf- kube-system 3335a4b7-b5c6-401a-8883-2638b6db1a9d 848 0 2026-01-14 13:27:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gvhhh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic949a940afc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-" Jan 14 13:28:27.511945 containerd[1649]: 2026-01-14 13:28:25.608 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.511945 containerd[1649]: 2026-01-14 13:28:26.302 [INFO][4420] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" HandleID="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Workload="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:26.324 [INFO][4420] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" HandleID="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Workload="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c150), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gvhhh", "timestamp":"2026-01-14 13:28:26.302618617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:26.324 [INFO][4420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:26.936 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:26.936 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.014 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" host="localhost" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.090 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.136 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.150 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.173 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.513354 containerd[1649]: 2026-01-14 13:28:27.174 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" host="localhost" Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.181 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144 Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.212 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" host="localhost" Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.265 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" host="localhost" Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.268 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" host="localhost" Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.268 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:28:27.513860 containerd[1649]: 2026-01-14 13:28:27.271 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" HandleID="k8s-pod-network.8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Workload="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.514021 containerd[1649]: 2026-01-14 13:28:27.321 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3335a4b7-b5c6-401a-8883-2638b6db1a9d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gvhhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic949a940afc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.514904 containerd[1649]: 2026-01-14 13:28:27.321 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.514904 containerd[1649]: 2026-01-14 13:28:27.321 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic949a940afc ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.514904 containerd[1649]: 2026-01-14 13:28:27.390 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.515042 containerd[1649]: 2026-01-14 13:28:27.400 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3335a4b7-b5c6-401a-8883-2638b6db1a9d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144", Pod:"coredns-674b8bbfcf-gvhhh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic949a940afc", MAC:"06:32:b1:fc:38:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.515042 containerd[1649]: 2026-01-14 13:28:27.480 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvhhh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gvhhh-eth0" Jan 14 13:28:27.653000 audit[4510]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:27.653000 audit[4510]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2ffd9410 a2=0 a3=7ffc2ffd93fc items=0 ppid=3003 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.653000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:27.684248 systemd-networkd[1422]: cali23a085aa060: Link UP Jan 14 13:28:27.684598 systemd-networkd[1422]: cali23a085aa060: Gained carrier Jan 14 13:28:27.698000 audit[4510]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:27.698000 audit[4510]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc2ffd9410 a2=0 a3=0 items=0 ppid=3003 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:27.698000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:27.757420 containerd[1649]: time="2026-01-14T13:28:27.757375431Z" level=info msg="connecting to shim 8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144" address="unix:///run/containerd/s/5b3552f12b315e9237d9d6e9c2a268322b9d9403b6f300d6fb3c14cde53e6456" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:25.271 [INFO][4345] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:25.346 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0 calico-apiserver-68b6f8f57b- calico-apiserver 43c81015-17c1-4886-ba54-03a8237f3050 847 0 2026-01-14 13:27:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b6f8f57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68b6f8f57b-4vsgx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23a085aa060 [] [] }} ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:25.347 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.815020 containerd[1649]: 
2026-01-14 13:28:26.333 [INFO][4409] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" HandleID="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:26.353 [INFO][4409] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" HandleID="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039e050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68b6f8f57b-4vsgx", "timestamp":"2026-01-14 13:28:26.333485902 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:26.353 [INFO][4409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.269 [INFO][4409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.269 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.326 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.378 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.422 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.473 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.518 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.518 [INFO][4409] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.546 [INFO][4409] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.621 [INFO][4409] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.662 [INFO][4409] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.665 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" host="localhost" Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.665 [INFO][4409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:28:27.815020 containerd[1649]: 2026-01-14 13:28:27.665 [INFO][4409] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" HandleID="k8s-pod-network.29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.675 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0", GenerateName:"calico-apiserver-68b6f8f57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"43c81015-17c1-4886-ba54-03a8237f3050", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b6f8f57b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68b6f8f57b-4vsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23a085aa060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.675 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.675 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23a085aa060 ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.686 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.692 [INFO][4345] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0", GenerateName:"calico-apiserver-68b6f8f57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"43c81015-17c1-4886-ba54-03a8237f3050", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b6f8f57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae", Pod:"calico-apiserver-68b6f8f57b-4vsgx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23a085aa060", MAC:"d6:67:f2:b8:b2:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:27.816489 containerd[1649]: 2026-01-14 13:28:27.770 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-4vsgx" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--4vsgx-eth0" Jan 14 13:28:27.880587 systemd[1]: Started cri-containerd-ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964.scope - libcontainer container ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964. Jan 14 13:28:28.125915 systemd[1]: Started cri-containerd-8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144.scope - libcontainer container 8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144. Jan 14 13:28:28.180000 audit: BPF prog-id=189 op=LOAD Jan 14 13:28:28.191437 containerd[1649]: time="2026-01-14T13:28:28.191301250Z" level=info msg="connecting to shim 29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae" address="unix:///run/containerd/s/3b33460d8d7f5db9d4d541d822f0879c832ac90bec76f5f4703ce1d1d4e3966f" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:28.199000 audit: BPF prog-id=190 op=LOAD Jan 14 13:28:28.199000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.199000 audit: BPF prog-id=190 op=UNLOAD Jan 14 13:28:28.199000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.201000 audit: BPF prog-id=191 op=LOAD Jan 14 13:28:28.201000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.201000 audit: BPF prog-id=192 op=LOAD Jan 14 13:28:28.201000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.201000 audit: BPF prog-id=192 op=UNLOAD Jan 14 13:28:28.201000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.201000 audit: BPF prog-id=191 op=UNLOAD Jan 14 13:28:28.201000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.201000 audit: BPF prog-id=193 op=LOAD Jan 14 13:28:28.201000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=4484 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393430323036626539336134626534613739366438356163336639 Jan 14 13:28:28.259833 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:28.281000 audit: BPF prog-id=194 
op=LOAD Jan 14 13:28:28.286000 audit: BPF prog-id=195 op=LOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=195 op=UNLOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=196 op=LOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=197 op=LOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=197 op=UNLOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=196 op=UNLOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.286000 audit: BPF prog-id=198 op=LOAD Jan 14 13:28:28.286000 audit[4540]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4523 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866646131326430326664316633646339326635656664306265326637 Jan 14 13:28:28.306333 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:28.383399 systemd[1]: Started cri-containerd-29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae.scope - libcontainer container 29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae. 
Jan 14 13:28:28.722000 audit: BPF prog-id=199 op=LOAD Jan 14 13:28:28.724000 audit: BPF prog-id=200 op=LOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=200 op=UNLOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=201 op=LOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=202 op=LOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=202 op=UNLOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=201 op=UNLOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.724000 audit: BPF prog-id=203 op=LOAD Jan 14 13:28:28.724000 audit[4606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4587 pid=4606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:28.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239623739666639393538653564386165316139376237633762363566 Jan 14 13:28:28.791194 containerd[1649]: time="2026-01-14T13:28:28.789572978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvhhh,Uid:3335a4b7-b5c6-401a-8883-2638b6db1a9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144\"" Jan 14 13:28:28.806454 kubelet[2886]: E0114 13:28:28.805400 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:28.810529 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:28.811593 systemd-networkd[1422]: calic949a940afc: Gained IPv6LL Jan 14 13:28:28.902307 containerd[1649]: time="2026-01-14T13:28:28.902263512Z" level=info msg="CreateContainer within sandbox \"8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:28:29.003427 systemd-networkd[1422]: cali23a085aa060: Gained IPv6LL Jan 14 13:28:29.010865 systemd-networkd[1422]: cali8eaf4ea54dc: Gained IPv6LL Jan 14 13:28:29.137602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879287930.mount: Deactivated successfully. Jan 14 13:28:29.175844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880168492.mount: Deactivated successfully. Jan 14 13:28:29.212628 containerd[1649]: time="2026-01-14T13:28:29.211517115Z" level=info msg="Container 9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:29.284519 containerd[1649]: time="2026-01-14T13:28:29.280633929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h2gf2,Uid:97139d64-ebd5-495e-81ad-3f4aa4c54bfd,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea940206be93a4be4a796d85ac3f9cc9212c6dfc5315d24cc3f0e9dddbc5e964\"" Jan 14 13:28:29.325499 containerd[1649]: time="2026-01-14T13:28:29.324493160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:28:29.362987 containerd[1649]: time="2026-01-14T13:28:29.354449403Z" level=info msg="CreateContainer within sandbox \"8fda12d02fd1f3dc92f5efd0be2f74dd9d6f6edce34bbebc7916a8e7c7d94144\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866\"" Jan 14 13:28:29.362987 containerd[1649]: time="2026-01-14T13:28:29.362397781Z" level=info msg="StartContainer for \"9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866\"" Jan 14 13:28:29.421837 containerd[1649]: time="2026-01-14T13:28:29.420352436Z" level=info msg="connecting to shim 9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866" address="unix:///run/containerd/s/5b3552f12b315e9237d9d6e9c2a268322b9d9403b6f300d6fb3c14cde53e6456" protocol=ttrpc version=3 Jan 14 13:28:29.450000 
audit: BPF prog-id=204 op=LOAD Jan 14 13:28:29.450000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe88fa5ed0 a2=94 a3=1 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.450000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.450000 audit: BPF prog-id=204 op=UNLOAD Jan 14 13:28:29.450000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe88fa5ed0 a2=94 a3=1 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.450000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.509446 containerd[1649]: time="2026-01-14T13:28:29.505471271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:29.537505 containerd[1649]: time="2026-01-14T13:28:29.523995749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:28:29.537505 containerd[1649]: time="2026-01-14T13:28:29.526579430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:29.539495 kubelet[2886]: E0114 13:28:29.539458 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 
13:28:29.539620 kubelet[2886]: E0114 13:28:29.539598 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:28:29.539985 kubelet[2886]: E0114 13:28:29.539924 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzzxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:29.547379 kubelet[2886]: E0114 13:28:29.547348 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:29.605000 audit: BPF prog-id=205 op=LOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7ffe88fa5ec0 a2=94 a3=4 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.605000 audit: BPF prog-id=205 op=UNLOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe88fa5ec0 a2=0 a3=4 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.605000 audit: BPF prog-id=206 op=LOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe88fa5d20 a2=94 a3=5 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.605000 audit: BPF prog-id=206 op=UNLOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe88fa5d20 a2=0 a3=5 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.605000 audit: BPF prog-id=207 op=LOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe88fa5f40 a2=94 a3=6 items=0 
ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.605000 audit: BPF prog-id=207 op=UNLOAD Jan 14 13:28:29.605000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe88fa5f40 a2=0 a3=6 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.605000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.611000 audit: BPF prog-id=208 op=LOAD Jan 14 13:28:29.611000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe88fa56f0 a2=94 a3=88 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.611000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.611000 audit: BPF prog-id=209 op=LOAD Jan 14 13:28:29.611000 audit[4474]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe88fa5570 a2=94 a3=2 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.611000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.611000 audit: BPF prog-id=209 op=UNLOAD Jan 14 13:28:29.611000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe88fa55a0 a2=0 a3=7ffe88fa56a0 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.611000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.619000 audit: BPF prog-id=208 op=UNLOAD Jan 14 13:28:29.619000 audit[4474]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3ca45d10 a2=0 a3=e6a2af83c371d3d0 items=0 ppid=4207 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.619000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:28:29.685983 containerd[1649]: time="2026-01-14T13:28:29.684640958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-4vsgx,Uid:43c81015-17c1-4886-ba54-03a8237f3050,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"29b79ff9958e5d8ae1a97b7c7b65fdb2d011ba8a53f43bbffc7ff00c414438ae\"" Jan 14 13:28:29.702873 containerd[1649]: time="2026-01-14T13:28:29.702847575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:28:29.882000 audit: BPF prog-id=210 op=LOAD Jan 14 13:28:29.882000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff01b00000 a2=98 a3=1999999999999999 items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.882000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.882000 audit: BPF prog-id=210 op=UNLOAD Jan 14 
13:28:29.882000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff01afffd0 a3=0 items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.882000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.882000 audit: BPF prog-id=211 op=LOAD Jan 14 13:28:29.882000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff01affee0 a2=94 a3=ffff items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.882000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.882000 audit: BPF prog-id=211 op=UNLOAD Jan 14 13:28:29.882000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff01affee0 a2=94 a3=ffff items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.882000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.882000 audit: BPF prog-id=212 op=LOAD Jan 14 13:28:29.882000 audit[4668]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff01afff20 a2=94 a3=7fff01b00100 items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.882000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.885000 audit: BPF prog-id=212 op=UNLOAD Jan 14 13:28:29.885000 audit[4668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff01afff20 a2=94 a3=7fff01b00100 items=0 ppid=4207 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:29.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:28:29.973583 systemd[1]: Started cri-containerd-9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866.scope - libcontainer container 9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866. 
Jan 14 13:28:29.990666 containerd[1649]: time="2026-01-14T13:28:29.989646705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:30.006294 containerd[1649]: time="2026-01-14T13:28:30.004891352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:28:30.006294 containerd[1649]: time="2026-01-14T13:28:30.004994363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:30.006603 kubelet[2886]: E0114 13:28:30.006546 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:30.010328 kubelet[2886]: E0114 13:28:30.010295 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:30.010583 kubelet[2886]: E0114 13:28:30.010531 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btjwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:30.029964 kubelet[2886]: E0114 13:28:30.026373 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:28:30.129344 systemd-networkd[1422]: calif09eb7837d8: Link UP Jan 14 13:28:30.136336 systemd-networkd[1422]: calif09eb7837d8: Gained carrier Jan 14 13:28:30.238000 audit: BPF prog-id=213 op=LOAD Jan 14 13:28:30.244000 audit: BPF prog-id=214 op=LOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=214 op=UNLOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=215 op=LOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=216 op=LOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=216 op=UNLOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=215 op=UNLOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.244000 audit: BPF prog-id=217 op=LOAD Jan 14 13:28:30.244000 audit[4649]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4523 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:30.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323532303564326439323837623131643564353932633161643937 Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:28.613 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0 calico-kube-controllers-c96748b8f- calico-system 
1356d1d1-69e1-470e-955d-5a3a9ab090a6 849 0 2026-01-14 13:27:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c96748b8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c96748b8f-wwf76 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif09eb7837d8 [] [] }} ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:28.617 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.331 [INFO][4633] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" HandleID="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Workload="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.336 [INFO][4633] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" HandleID="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Workload="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000382050), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c96748b8f-wwf76", "timestamp":"2026-01-14 13:28:29.331645458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.336 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.336 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.336 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.586 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.665 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.700 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.731 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.769 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.769 [INFO][4633] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" host="localhost" Jan 14 
13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.780 [INFO][4633] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411 Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.819 [INFO][4633] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.981 [INFO][4633] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.981 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" host="localhost" Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.981 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:28:30.260388 containerd[1649]: 2026-01-14 13:28:29.981 [INFO][4633] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" HandleID="k8s-pod-network.d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Workload="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.029 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0", GenerateName:"calico-kube-controllers-c96748b8f-", Namespace:"calico-system", SelfLink:"", UID:"1356d1d1-69e1-470e-955d-5a3a9ab090a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96748b8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c96748b8f-wwf76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif09eb7837d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.029 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.029 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif09eb7837d8 ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.132 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.142 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0", GenerateName:"calico-kube-controllers-c96748b8f-", Namespace:"calico-system", SelfLink:"", UID:"1356d1d1-69e1-470e-955d-5a3a9ab090a6", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c96748b8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411", Pod:"calico-kube-controllers-c96748b8f-wwf76", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif09eb7837d8", MAC:"a6:e1:d4:9d:6b:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:30.263506 containerd[1649]: 2026-01-14 13:28:30.224 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" Namespace="calico-system" Pod="calico-kube-controllers-c96748b8f-wwf76" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c96748b8f--wwf76-eth0" Jan 14 13:28:30.494280 containerd[1649]: time="2026-01-14T13:28:30.491808616Z" level=info msg="StartContainer for 
\"9425205d2d9287b11d5d592c1ad975c37b18c7eb435330c7bb05a99251cec866\" returns successfully" Jan 14 13:28:30.623498 containerd[1649]: time="2026-01-14T13:28:30.614308339Z" level=info msg="connecting to shim d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411" address="unix:///run/containerd/s/c7318f8bfda8c4ec55b6c7f3868c3f5a4d92cf0ee0aa2d80670f01f0f096c2f9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:30.673035 kubelet[2886]: E0114 13:28:30.660838 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:30.741529 kubelet[2886]: E0114 13:28:30.740447 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:28:30.747492 kubelet[2886]: E0114 13:28:30.746359 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:31.141640 kernel: kauditd_printk_skb: 196 callbacks suppressed Jan 14 13:28:31.141907 kernel: audit: type=1325 audit(1768397311.086:648): table=filter:123 family=2 entries=20 
op=nft_register_rule pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:31.086000 audit[4736]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:31.173042 kubelet[2886]: I0114 13:28:31.171608 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gvhhh" podStartSLOduration=52.17158816 podStartE2EDuration="52.17158816s" podCreationTimestamp="2026-01-14 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:28:30.795463115 +0000 UTC m=+57.233084563" watchObservedRunningTime="2026-01-14 13:28:31.17158816 +0000 UTC m=+57.609209577" Jan 14 13:28:31.086000 audit[4736]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff12b763f0 a2=0 a3=7fff12b763dc items=0 ppid=3003 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:31.172799 systemd[1]: Started cri-containerd-d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411.scope - libcontainer container d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411. 
Jan 14 13:28:31.268380 kernel: audit: type=1300 audit(1768397311.086:648): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff12b763f0 a2=0 a3=7fff12b763dc items=0 ppid=3003 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:31.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:31.375412 kernel: audit: type=1327 audit(1768397311.086:648): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:31.375519 kernel: audit: type=1325 audit(1768397311.166:649): table=nat:124 family=2 entries=14 op=nft_register_rule pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:31.166000 audit[4736]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:31.166000 audit[4736]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff12b763f0 a2=0 a3=0 items=0 ppid=3003 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:31.447347 kernel: audit: type=1300 audit(1768397311.166:649): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff12b763f0 a2=0 a3=0 items=0 ppid=3003 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:31.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
13:28:31.483835 kernel: audit: type=1327 audit(1768397311.166:649): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:31.724416 kubelet[2886]: E0114 13:28:31.723634 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:31.737861 kubelet[2886]: E0114 13:28:31.733440 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:31.760926 kubelet[2886]: E0114 13:28:31.760578 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:28:31.796427 systemd-networkd[1422]: vxlan.calico: Link UP Jan 14 13:28:31.796523 systemd-networkd[1422]: vxlan.calico: Gained carrier Jan 14 13:28:31.982000 audit[4754]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:32.027557 kernel: audit: type=1325 audit(1768397311.982:650): table=filter:125 family=2 
entries=20 op=nft_register_rule pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:31.982000 audit[4754]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc343a120 a2=0 a3=7ffcc343a10c items=0 ppid=3003 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.058377 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:32.114550 systemd-networkd[1422]: calif09eb7837d8: Gained IPv6LL Jan 14 13:28:32.128591 kernel: audit: type=1300 audit(1768397311.982:650): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc343a120 a2=0 a3=7ffcc343a10c items=0 ppid=3003 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.151391 kernel: audit: type=1327 audit(1768397311.982:650): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:32.151476 kernel: audit: type=1334 audit(1768397312.009:651): prog-id=218 op=LOAD Jan 14 13:28:31.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:32.009000 audit: BPF prog-id=218 op=LOAD Jan 14 13:28:32.010000 audit: BPF prog-id=219 op=LOAD Jan 14 13:28:32.010000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f2238 a2=98 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.010000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.010000 audit: BPF prog-id=219 op=UNLOAD Jan 14 13:28:32.010000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.010000 audit: BPF prog-id=220 op=LOAD Jan 14 13:28:32.010000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f2488 a2=98 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.013000 audit: BPF prog-id=221 op=LOAD Jan 14 13:28:32.013000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f2218 a2=98 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:28:32.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.013000 audit: BPF prog-id=221 op=UNLOAD Jan 14 13:28:32.013000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.013000 audit: BPF prog-id=220 op=UNLOAD Jan 14 13:28:32.013000 audit[4725]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.013000 audit: BPF prog-id=222 op=LOAD Jan 14 13:28:32.013000 audit[4725]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f26e8 a2=98 a3=0 items=0 ppid=4709 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431663530626364646138656366313137336466646330646263623735 Jan 14 13:28:32.058000 audit[4754]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4754 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:32.058000 audit[4754]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc343a120 a2=0 a3=0 items=0 ppid=3003 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:32.707000 audit: BPF prog-id=223 op=LOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1ec808a0 a2=98 a3=0 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.707000 audit: BPF prog-id=223 op=UNLOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe1ec80870 a3=0 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.707000 audit: BPF prog-id=224 op=LOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1ec806b0 a2=94 a3=54428f items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.707000 audit: BPF prog-id=224 op=UNLOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1ec806b0 a2=94 a3=54428f items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.707000 audit: BPF prog-id=225 op=LOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1ec806e0 a2=94 a3=2 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.707000 audit: BPF prog-id=225 op=UNLOAD Jan 14 13:28:32.707000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1ec806e0 a2=0 a3=2 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.709000 audit: BPF prog-id=226 op=LOAD Jan 14 13:28:32.709000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe1ec80490 a2=94 a3=4 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.709000 audit: BPF prog-id=226 op=UNLOAD Jan 14 13:28:32.709000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe1ec80490 a2=94 a3=4 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.709000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.709000 audit: BPF prog-id=227 op=LOAD Jan 14 13:28:32.709000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe1ec80590 a2=94 a3=7ffe1ec80710 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.709000 audit: BPF prog-id=227 op=UNLOAD Jan 14 13:28:32.709000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe1ec80590 a2=0 a3=7ffe1ec80710 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.709000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.712000 audit: BPF prog-id=228 op=LOAD Jan 14 13:28:32.712000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe1ec7fcc0 a2=94 a3=2 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.712000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.712000 audit: BPF prog-id=228 op=UNLOAD Jan 14 13:28:32.712000 audit[4770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe1ec7fcc0 a2=0 a3=2 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.712000 audit: BPF prog-id=229 op=LOAD Jan 14 13:28:32.712000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe1ec7fdc0 a2=94 a3=30 items=0 ppid=4207 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.712000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:28:32.829068 kubelet[2886]: E0114 13:28:32.829038 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:32.856000 audit: BPF prog-id=230 op=LOAD Jan 14 13:28:32.856000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf5c93c50 a2=98 a3=0 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:32.857000 audit: BPF prog-id=230 op=UNLOAD Jan 14 13:28:32.857000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdf5c93c20 a3=0 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:32.857000 audit: BPF prog-id=231 op=LOAD Jan 14 13:28:32.857000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf5c93a40 a2=94 a3=54428f items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:32.857000 audit: BPF prog-id=231 op=UNLOAD Jan 14 13:28:32.857000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf5c93a40 a2=94 a3=54428f items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:32.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:32.857000 audit: BPF prog-id=232 op=LOAD Jan 14 13:28:32.857000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf5c93a70 a2=94 a3=2 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:32.857000 audit: BPF prog-id=232 op=UNLOAD Jan 14 13:28:32.857000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf5c93a70 a2=0 a3=2 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:32.857000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.141363 containerd[1649]: time="2026-01-14T13:28:33.126373471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c96748b8f-wwf76,Uid:1356d1d1-69e1-470e-955d-5a3a9ab090a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1f50bcdda8ecf1173dfdc0dbcb758da04ca7f7515a908514c06562d70a0a411\"" Jan 14 13:28:33.165030 containerd[1649]: time="2026-01-14T13:28:33.161495379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:28:33.219895 systemd-networkd[1422]: 
vxlan.calico: Gained IPv6LL Jan 14 13:28:33.252000 audit[4787]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:33.252000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffeed63c10 a2=0 a3=7fffeed63bfc items=0 ppid=3003 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:33.271000 audit[4787]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:33.271000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffeed63c10 a2=0 a3=7fffeed63bfc items=0 ppid=3003 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:33.341801 containerd[1649]: time="2026-01-14T13:28:33.340634956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:33.371398 containerd[1649]: time="2026-01-14T13:28:33.370841784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:28:33.371398 containerd[1649]: 
time="2026-01-14T13:28:33.370950215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:33.376602 kubelet[2886]: E0114 13:28:33.375930 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:28:33.383301 kubelet[2886]: E0114 13:28:33.376071 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:28:33.383301 kubelet[2886]: E0114 13:28:33.382497 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:33.386388 kubelet[2886]: E0114 13:28:33.386329 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:28:33.816279 kubelet[2886]: E0114 13:28:33.815513 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:33.825495 kubelet[2886]: E0114 13:28:33.825460 2886 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:28:33.877051 containerd[1649]: time="2026-01-14T13:28:33.877005112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,}" Jan 14 13:28:33.881000 audit: BPF prog-id=233 op=LOAD Jan 14 13:28:33.881000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf5c93930 a2=94 a3=1 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.881000 audit: BPF prog-id=233 op=UNLOAD Jan 14 13:28:33.881000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf5c93930 a2=94 a3=1 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.881000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.902000 audit: BPF prog-id=234 op=LOAD Jan 14 13:28:33.902000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf5c93920 a2=94 a3=4 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.902000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.909000 audit: BPF prog-id=234 op=UNLOAD Jan 14 13:28:33.909000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf5c93920 a2=0 a3=4 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.909000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.909000 audit: BPF prog-id=235 op=LOAD Jan 14 13:28:33.909000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf5c93780 a2=94 a3=5 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.909000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.909000 audit: BPF prog-id=235 op=UNLOAD Jan 14 13:28:33.909000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdf5c93780 a2=0 a3=5 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.909000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.909000 audit: BPF prog-id=236 op=LOAD Jan 14 13:28:33.909000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf5c939a0 a2=94 a3=6 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.909000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.909000 audit: BPF prog-id=236 op=UNLOAD Jan 14 13:28:33.909000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf5c939a0 a2=0 a3=6 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.909000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.913000 audit: BPF prog-id=237 op=LOAD Jan 14 13:28:33.913000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf5c93150 a2=94 a3=88 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.913000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.913000 audit: BPF prog-id=238 op=LOAD Jan 14 13:28:33.913000 audit[4779]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdf5c92fd0 a2=94 a3=2 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.913000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.913000 audit: BPF prog-id=238 op=UNLOAD Jan 14 13:28:33.913000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdf5c93000 a2=0 a3=7ffdf5c93100 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.913000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:33.915000 audit: BPF prog-id=237 op=UNLOAD Jan 14 13:28:33.915000 audit[4779]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2175dd10 a2=0 a3=6e6c61d0ebe0b917 items=0 ppid=4207 pid=4779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:33.915000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:28:34.055000 audit: BPF prog-id=229 op=UNLOAD Jan 14 13:28:34.055000 audit[4207]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0012160c0 a2=0 a3=0 items=0 ppid=4191 pid=4207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:34.055000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 13:28:34.826882 kubelet[2886]: E0114 13:28:34.826527 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:28:35.401567 systemd-networkd[1422]: calicc07670934e: Link UP Jan 
14 13:28:35.402054 systemd-networkd[1422]: calicc07670934e: Gained carrier Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.410 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ckktd-eth0 csi-node-driver- calico-system 8200b33d-eb45-4c93-98d1-0c3029a31280 727 0 2026-01-14 13:27:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ckktd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicc07670934e [] [] }} ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.410 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.756 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" HandleID="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Workload="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.757 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" 
HandleID="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Workload="localhost-k8s-csi--node--driver--ckktd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ckktd", "timestamp":"2026-01-14 13:28:34.756799655 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.758 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.759 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.761 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:34.860 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.081 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.171 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.208 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.241 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.256 
[INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.273 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0 Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.298 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.348 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.357 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" host="localhost" Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.357 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
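The audit PROCTITLE values repeated throughout this log encode the audited process's command line as a hex string, with NUL bytes separating the arguments. A minimal decoding sketch (plain Python; the hex literal is copied verbatim from the bpftool records above):

```python
# Decode an audit PROCTITLE hex string into its argv list.
# The hex below is copied verbatim from the bpftool audit records in this log.
PROCTITLE_HEX = "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"

def decode_proctitle(hex_value: str) -> list[str]:
    # The raw proctitle buffer separates arguments with NUL bytes.
    return bytes.fromhex(hex_value).decode().split("\x00")

if __name__ == "__main__":
    print(decode_proctitle(PROCTITLE_HEX))
```

Decoded, the repeated record is calico-node inspecting its pinned XDP prefilter program: `bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A`.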
Jan 14 13:28:35.565007 containerd[1649]: 2026-01-14 13:28:35.357 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" HandleID="k8s-pod-network.74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Workload="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.381 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ckktd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8200b33d-eb45-4c93-98d1-0c3029a31280", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ckktd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicc07670934e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.381 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.381 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc07670934e ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.392 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.421 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ckktd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8200b33d-eb45-4c93-98d1-0c3029a31280", ResourceVersion:"727", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0", Pod:"csi-node-driver-ckktd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicc07670934e", MAC:"6a:93:94:c3:d7:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:35.578421 containerd[1649]: 2026-01-14 13:28:35.526 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" Namespace="calico-system" Pod="csi-node-driver-ckktd" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckktd-eth0" Jan 14 13:28:35.602000 audit[4840]: NETFILTER_CFG table=mangle:129 family=2 entries=16 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:35.602000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffffa6ce320 a2=0 a3=7ffffa6ce30c items=0 ppid=4207 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:35.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:35.636000 audit[4838]: NETFILTER_CFG table=nat:130 family=2 entries=15 op=nft_register_chain pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:35.636000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff54548ed0 a2=0 a3=7fff54548ebc items=0 ppid=4207 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:35.636000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:35.905484 containerd[1649]: time="2026-01-14T13:28:35.905441782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:28:35.939000 audit[4849]: NETFILTER_CFG table=raw:131 family=2 entries=21 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:35.939000 audit[4849]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffeb7acb3e0 a2=0 a3=7ffeb7acb3cc items=0 ppid=4207 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:35.939000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:35.952298 containerd[1649]: time="2026-01-14T13:28:35.952055509Z" level=info msg="connecting to shim 74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0" address="unix:///run/containerd/s/0ca19ed6c835e8f824e2706627a4d0aeb11168c293e51bf6d5b6765d9489ced2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:35.977000 audit[4841]: NETFILTER_CFG table=filter:132 family=2 entries=234 op=nft_register_chain pid=4841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:35.977000 audit[4841]: SYSCALL arch=c000003e syscall=46 success=yes exit=137032 a0=3 a1=7ffd767c7690 a2=0 a3=55603eb18000 items=0 ppid=4207 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:35.977000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:36.339356 systemd[1]: Started cri-containerd-74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0.scope - libcontainer container 74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0. 
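The NETFILTER_CFG records above carry the same hex-encoded PROCTITLE format. Decoding the two restore invocations (hex copied verbatim from the records above) shows the locking and flush flags each caller uses; attributing them to kube-proxy and calico-node is an inference from the ppid values in the log, not stated directly:

```python
# Decode the two xtables restore command lines recorded in the
# NETFILTER_CFG audit entries above (hex copied verbatim from the log).
RESTORE_PROCTITLES = [
    # ppid=3003 in the log (likely kube-proxy):
    "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
    # ppid=4207 in the log (calico-node, per its audit records elsewhere):
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030",
]

def decode_argv(hex_value: str) -> list[str]:
    # Arguments in the proctitle buffer are NUL-separated.
    return bytes.fromhex(hex_value).decode().split("\x00")

if __name__ == "__main__":
    for hex_value in RESTORE_PROCTITLES:
        print(" ".join(decode_argv(hex_value)))
```

Both callers pass `--noflush`, i.e. they append to the existing ruleset rather than replacing it, and each sets its own lock-wait behavior.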
Jan 14 13:28:36.598378 kernel: kauditd_printk_skb: 138 callbacks suppressed Jan 14 13:28:36.603070 kernel: audit: type=1334 audit(1768397316.563:698): prog-id=239 op=LOAD Jan 14 13:28:36.563000 audit: BPF prog-id=239 op=LOAD Jan 14 13:28:36.607000 audit: BPF prog-id=240 op=LOAD Jan 14 13:28:36.706892 kernel: audit: type=1334 audit(1768397316.607:699): prog-id=240 op=LOAD Jan 14 13:28:36.706983 kernel: audit: type=1300 audit(1768397316.607:699): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.607000 audit[4886]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.706847 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:36.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.781334 kernel: audit: type=1327 audit(1768397316.607:699): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.607000 audit: BPF prog-id=240 op=UNLOAD Jan 14 13:28:36.809291 kernel: audit: type=1334 audit(1768397316.607:700): prog-id=240 op=UNLOAD 
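From this point on, the kernel prints the queued audit records itself (after the "kauditd_printk_skb: 138 callbacks suppressed" notice), prefixing each with a numeric record type and an `audit(seconds.millis:serial)` stamp. The pairing of numbers to names is visible in the log itself (type=1300 lines are SYSCALL records, 1327 PROCTITLE, 1334 BPF). A small sketch that parses that prefix; the regex and field names are my own choices for illustration:

```python
import re

# Numeric audit record types seen in the kernel-printed entries above,
# matching the named records they accompany in this log.
AUDIT_RECORD_TYPES = {
    1300: "SYSCALL",    # AUDIT_SYSCALL
    1327: "PROCTITLE",  # AUDIT_PROCTITLE
    1334: "BPF",        # AUDIT_BPF
}

# Matches the "type=NNNN audit(SECS.MILLIS:SERIAL)" prefix of a kernel audit line.
AUDIT_RE = re.compile(r"type=(\d+) audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit_prefix(line: str):
    m = AUDIT_RE.search(line)
    if not m:
        return None
    rec_type, secs, millis, serial = m.groups()
    return {
        "type": AUDIT_RECORD_TYPES.get(int(rec_type), int(rec_type)),
        "timestamp": float(f"{secs}.{millis}"),
        "serial": int(serial),
    }

if __name__ == "__main__":
    # Line fragment copied from the log above.
    print(parse_audit_prefix("audit: type=1334 audit(1768397316.563:698): prog-id=239 op=LOAD"))
```

The serial number (`:698`, `:699`, …) ties together the SYSCALL, PROCTITLE, and BPF records that belong to one audited event, which is how the interleaved lines above can be regrouped.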
Jan 14 13:28:36.607000 audit[4886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.978257 kernel: audit: type=1300 audit(1768397316.607:700): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.978409 kernel: audit: type=1327 audit(1768397316.607:700): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.978569 containerd[1649]: time="2026-01-14T13:28:36.935853122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,}" Jan 14 13:28:36.979395 kubelet[2886]: E0114 13:28:36.903529 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:36.616000 audit: BPF prog-id=241 op=LOAD Jan 14 13:28:37.028055 kernel: audit: type=1334 audit(1768397316.616:701): prog-id=241 op=LOAD Jan 14 13:28:36.616000 audit[4886]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:37.055582 containerd[1649]: time="2026-01-14T13:28:37.052964430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckktd,Uid:8200b33d-eb45-4c93-98d1-0c3029a31280,Namespace:calico-system,Attempt:0,} returns sandbox id \"74a289c255a6cffcf536797fb73c698ddf64b75bd5db02298baf3edeabb23de0\"" Jan 14 13:28:37.084423 containerd[1649]: time="2026-01-14T13:28:37.082260039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:28:37.137520 kernel: audit: type=1300 audit(1768397316.616:701): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:37.203444 kernel: audit: type=1327 audit(1768397316.616:701): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.616000 audit: BPF prog-id=242 op=LOAD Jan 14 13:28:36.616000 audit[4886]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.616000 audit: BPF prog-id=242 op=UNLOAD Jan 14 13:28:36.616000 audit[4886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.616000 audit: BPF prog-id=241 op=UNLOAD Jan 14 13:28:36.616000 audit[4886]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.616000 audit: BPF prog-id=243 op=LOAD Jan 14 13:28:36.616000 audit[4886]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4864 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734613238396332353561366366666366353336373937666237336336 Jan 14 13:28:36.708000 audit[4917]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:36.708000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=27180 a0=3 a1=7ffdf7105f70 a2=0 a3=7ffdf7105f5c items=0 ppid=4207 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:36.708000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:37.323309 systemd-networkd[1422]: calicc07670934e: Gained IPv6LL Jan 14 13:28:37.327351 systemd-networkd[1422]: califa1bbd310fd: Link UP Jan 14 13:28:37.329017 systemd-networkd[1422]: califa1bbd310fd: Gained carrier Jan 14 13:28:37.342996 containerd[1649]: time="2026-01-14T13:28:37.341622747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:37.382298 containerd[1649]: time="2026-01-14T13:28:37.381011960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:28:37.382298 containerd[1649]: time="2026-01-14T13:28:37.381401984Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:37.394296 kubelet[2886]: E0114 13:28:37.389517 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:28:37.394296 kubelet[2886]: E0114 13:28:37.389568 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:28:37.394534 kubelet[2886]: E0114 13:28:37.394481 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.522 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0 calico-apiserver-68b6f8f57b- calico-apiserver fc822dd2-4a0b-4df8-969d-8ce5598b7069 845 0 2026-01-14 13:27:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68b6f8f57b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68b6f8f57b-kb2gl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califa1bbd310fd [] [] }} ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.523 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.829 [INFO][4912] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" HandleID="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.830 [INFO][4912] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" HandleID="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000507610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68b6f8f57b-kb2gl", "timestamp":"2026-01-14 13:28:36.829572474 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.830 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.830 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.830 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.883 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:36.935 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.012 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.047 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.072 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.079 [INFO][4912] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.104 [INFO][4912] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738 Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.193 [INFO][4912] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.251 [INFO][4912] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.254 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" host="localhost" Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.254 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:28:37.433309 containerd[1649]: 2026-01-14 13:28:37.254 [INFO][4912] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" HandleID="k8s-pod-network.08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Workload="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.272 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0", GenerateName:"calico-apiserver-68b6f8f57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc822dd2-4a0b-4df8-969d-8ce5598b7069", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b6f8f57b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68b6f8f57b-kb2gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa1bbd310fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.272 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.272 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa1bbd310fd ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.322 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.332 [INFO][4865] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0", GenerateName:"calico-apiserver-68b6f8f57b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fc822dd2-4a0b-4df8-969d-8ce5598b7069", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68b6f8f57b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738", Pod:"calico-apiserver-68b6f8f57b-kb2gl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa1bbd310fd", MAC:"ce:a8:1b:56:d8:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:37.440863 containerd[1649]: 2026-01-14 13:28:37.407 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" Namespace="calico-apiserver" Pod="calico-apiserver-68b6f8f57b-kb2gl" WorkloadEndpoint="localhost-k8s-calico--apiserver--68b6f8f57b--kb2gl-eth0" Jan 14 13:28:37.458375 containerd[1649]: time="2026-01-14T13:28:37.454411806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:28:37.650313 containerd[1649]: time="2026-01-14T13:28:37.642623943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:37.681799 containerd[1649]: time="2026-01-14T13:28:37.681633175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:28:37.684976 containerd[1649]: time="2026-01-14T13:28:37.684506612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:37.690889 kubelet[2886]: E0114 13:28:37.690426 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:28:37.690889 kubelet[2886]: E0114 13:28:37.690483 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:28:37.690889 kubelet[2886]: E0114 13:28:37.690616 2886 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:37.705301 kubelet[2886]: E0114 13:28:37.702465 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:37.728000 audit[4951]: NETFILTER_CFG table=filter:134 family=2 entries=59 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:37.728000 audit[4951]: SYSCALL arch=c000003e syscall=46 success=yes exit=29476 a0=3 a1=7ffd21ccaa10 a2=0 a3=7ffd21cca9fc items=0 ppid=4207 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:37.728000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:37.829375 containerd[1649]: time="2026-01-14T13:28:37.826538942Z" level=info msg="connecting to shim 
08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738" address="unix:///run/containerd/s/2a50b85b78fca0d272a4786bf0380a65b569feda8ab0f093ef549afe36b043d6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:37.996374 kubelet[2886]: E0114 13:28:37.982638 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:38.215597 systemd[1]: Started cri-containerd-08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738.scope - libcontainer container 08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738. 
Jan 14 13:28:38.541000 audit: BPF prog-id=244 op=LOAD Jan 14 13:28:38.562000 audit: BPF prog-id=245 op=LOAD Jan 14 13:28:38.562000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.562000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.564000 audit: BPF prog-id=245 op=UNLOAD Jan 14 13:28:38.564000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.566000 audit: BPF prog-id=246 op=LOAD Jan 14 13:28:38.566000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.566000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.566000 audit: BPF prog-id=247 op=LOAD Jan 14 13:28:38.566000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.566000 audit: BPF prog-id=247 op=UNLOAD Jan 14 13:28:38.566000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.566000 audit: BPF prog-id=246 op=UNLOAD Jan 14 13:28:38.566000 audit[4981]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:38.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.567000 audit: BPF prog-id=248 op=LOAD Jan 14 13:28:38.567000 audit[4981]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4963 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:38.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038643936646362333638326230346565313961336264636434313963 Jan 14 13:28:38.576602 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:38.827581 systemd-networkd[1422]: calif52ca73eaf5: Link UP Jan 14 13:28:38.835604 systemd-networkd[1422]: calif52ca73eaf5: Gained carrier Jan 14 13:28:38.917343 systemd-networkd[1422]: califa1bbd310fd: Gained IPv6LL Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:37.786 [INFO][4926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0 coredns-674b8bbfcf- kube-system d4e5f128-84cf-45f6-bd4e-05162a204a27 839 0 2026-01-14 13:27:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-d7kbj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif52ca73eaf5 [{dns 
UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:37.790 [INFO][4926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.339 [INFO][4968] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" HandleID="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Workload="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.339 [INFO][4968] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" HandleID="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Workload="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038d230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-d7kbj", "timestamp":"2026-01-14 13:28:38.339455301 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.339 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.339 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.339 [INFO][4968] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.493 [INFO][4968] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.552 [INFO][4968] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.610 [INFO][4968] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.621 [INFO][4968] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.634 [INFO][4968] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.637 [INFO][4968] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.671 [INFO][4968] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1 Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.711 [INFO][4968] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.781 [INFO][4968] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.781 [INFO][4968] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" host="localhost" Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.781 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:28:38.970950 containerd[1649]: 2026-01-14 13:28:38.781 [INFO][4968] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" HandleID="k8s-pod-network.e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Workload="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.798 [INFO][4926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4e5f128-84cf-45f6-bd4e-05162a204a27", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-d7kbj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif52ca73eaf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.807 [INFO][4926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.808 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif52ca73eaf5 ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.828 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.843 [INFO][4926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d4e5f128-84cf-45f6-bd4e-05162a204a27", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1", Pod:"coredns-674b8bbfcf-d7kbj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif52ca73eaf5", MAC:"76:5d:9a:98:8b:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:28:38.976653 containerd[1649]: 2026-01-14 13:28:38.913 [INFO][4926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" Namespace="kube-system" Pod="coredns-674b8bbfcf-d7kbj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--d7kbj-eth0" Jan 14 13:28:39.019515 kubelet[2886]: E0114 13:28:39.019459 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:39.211654 containerd[1649]: time="2026-01-14T13:28:39.208660308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68b6f8f57b-kb2gl,Uid:fc822dd2-4a0b-4df8-969d-8ce5598b7069,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"08d96dcb3682b04ee19a3bdcd419c8415f44a640c7700fbcd4234a08ce12f738\"" Jan 14 13:28:39.237969 containerd[1649]: time="2026-01-14T13:28:39.237243163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:28:39.268000 audit[5024]: NETFILTER_CFG table=filter:135 family=2 entries=36 op=nft_register_chain pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:28:39.268000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffcfe7d0df0 a2=0 a3=7ffcfe7d0ddc items=0 ppid=4207 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.268000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:28:39.287533 containerd[1649]: time="2026-01-14T13:28:39.287491328Z" level=info msg="connecting to shim e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1" address="unix:///run/containerd/s/9a99491324a61a3a2c462a97f3d0a8d6b8b9b109e0f3b4888b5594db63984103" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:28:39.376513 containerd[1649]: time="2026-01-14T13:28:39.359507819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:39.398335 containerd[1649]: time="2026-01-14T13:28:39.397834585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:28:39.398335 containerd[1649]: time="2026-01-14T13:28:39.398043213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:39.400439 
kubelet[2886]: E0114 13:28:39.399454 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:39.400439 kubelet[2886]: E0114 13:28:39.399502 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:39.401968 kubelet[2886]: E0114 13:28:39.399656 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:39.411338 kubelet[2886]: E0114 13:28:39.411304 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:28:39.530047 systemd[1]: Started cri-containerd-e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1.scope - libcontainer container e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1. 
Jan 14 13:28:39.662000 audit: BPF prog-id=249 op=LOAD Jan 14 13:28:39.683000 audit: BPF prog-id=250 op=LOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=250 op=UNLOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=251 op=LOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=252 op=LOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=252 op=UNLOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=251 op=UNLOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.683000 audit: BPF prog-id=253 op=LOAD Jan 14 13:28:39.683000 audit[5042]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5031 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:39.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536613266633761326633663335373435363464646464623030323665 Jan 14 13:28:39.710481 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 13:28:40.008895 kubelet[2886]: E0114 13:28:40.008616 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:28:40.148312 containerd[1649]: time="2026-01-14T13:28:40.143801562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d7kbj,Uid:d4e5f128-84cf-45f6-bd4e-05162a204a27,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1\"" Jan 14 13:28:40.148917 kubelet[2886]: E0114 13:28:40.144991 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:40.176290 containerd[1649]: time="2026-01-14T13:28:40.175281015Z" level=info msg="CreateContainer within sandbox \"e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:28:40.369588 containerd[1649]: time="2026-01-14T13:28:40.360539250Z" level=info msg="Container 0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:28:40.443598 containerd[1649]: time="2026-01-14T13:28:40.435511663Z" level=info msg="CreateContainer within sandbox \"e6a2fc7a2f3f3574564ddddb0026ea8c6c2245f50164e12b62267c7999bb91e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e\"" Jan 14 13:28:40.446000 audit[5069]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:40.446000 audit[5069]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4f70af90 a2=0 a3=7ffc4f70af7c items=0 ppid=3003 pid=5069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:40.465912 containerd[1649]: time="2026-01-14T13:28:40.464859119Z" level=info msg="StartContainer for \"0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e\"" Jan 14 
13:28:40.474595 containerd[1649]: time="2026-01-14T13:28:40.474453536Z" level=info msg="connecting to shim 0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e" address="unix:///run/containerd/s/9a99491324a61a3a2c462a97f3d0a8d6b8b9b109e0f3b4888b5594db63984103" protocol=ttrpc version=3 Jan 14 13:28:40.479000 audit[5069]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5069 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:40.479000 audit[5069]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4f70af90 a2=0 a3=7ffc4f70af7c items=0 ppid=3003 pid=5069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:40.520033 systemd-networkd[1422]: calif52ca73eaf5: Gained IPv6LL Jan 14 13:28:40.632612 systemd[1]: Started cri-containerd-0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e.scope - libcontainer container 0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e. 
Jan 14 13:28:40.777000 audit: BPF prog-id=254 op=LOAD Jan 14 13:28:40.789000 audit: BPF prog-id=255 op=LOAD Jan 14 13:28:40.789000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.790000 audit: BPF prog-id=255 op=UNLOAD Jan 14 13:28:40.790000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.790000 audit: BPF prog-id=256 op=LOAD Jan 14 13:28:40.790000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.790000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.790000 audit: BPF prog-id=257 op=LOAD Jan 14 13:28:40.790000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.790000 audit: BPF prog-id=257 op=UNLOAD Jan 14 13:28:40.790000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.791000 audit: BPF prog-id=256 op=UNLOAD Jan 14 13:28:40.791000 audit[5070]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:28:40.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.791000 audit: BPF prog-id=258 op=LOAD Jan 14 13:28:40.791000 audit[5070]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5031 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:40.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064313639623536643739323265633037643239623231393338643839 Jan 14 13:28:40.953538 containerd[1649]: time="2026-01-14T13:28:40.898638788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:28:41.030661 containerd[1649]: time="2026-01-14T13:28:41.030621251Z" level=info msg="StartContainer for \"0d169b56d7922ec07d29b21938d89c24724ea045f4d81c3fd18933da413e371e\" returns successfully" Jan 14 13:28:41.084292 containerd[1649]: time="2026-01-14T13:28:41.080033537Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:41.099423 kubelet[2886]: E0114 13:28:41.099382 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:28:41.114795 kubelet[2886]: E0114 13:28:41.105392 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:28:41.114795 kubelet[2886]: E0114 13:28:41.108483 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:28:41.114795 kubelet[2886]: E0114 13:28:41.108597 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3119e5cb9c374c7884796c10460fa4dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesyst
em:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:41.124425 containerd[1649]: time="2026-01-14T13:28:41.099611908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:28:41.124425 containerd[1649]: time="2026-01-14T13:28:41.099628309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:41.131788 containerd[1649]: time="2026-01-14T13:28:41.129461765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:28:41.305421 containerd[1649]: time="2026-01-14T13:28:41.305307565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:41.322392 containerd[1649]: time="2026-01-14T13:28:41.321400520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:28:41.322392 containerd[1649]: 
time="2026-01-14T13:28:41.321508921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:41.326021 kubelet[2886]: E0114 13:28:41.324590 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:28:41.326021 kubelet[2886]: E0114 13:28:41.324639 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:28:41.326021 kubelet[2886]: E0114 13:28:41.324902 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:41.333996 kubelet[2886]: E0114 13:28:41.330515 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:28:41.670985 kernel: hrtimer: interrupt took 6082349 ns Jan 14 13:28:42.074011 kubelet[2886]: E0114 13:28:42.069636 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:42.185817 kubelet[2886]: I0114 13:28:42.181936 2886 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d7kbj" podStartSLOduration=63.181917305 podStartE2EDuration="1m3.181917305s" podCreationTimestamp="2026-01-14 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:28:42.145248873 +0000 UTC m=+68.582870290" watchObservedRunningTime="2026-01-14 13:28:42.181917305 +0000 UTC m=+68.619538712" Jan 14 13:28:42.425961 kernel: kauditd_printk_skb: 93 callbacks suppressed Jan 14 13:28:42.426281 kernel: audit: type=1325 audit(1768397322.404:735): table=filter:138 family=2 entries=14 op=nft_register_rule pid=5114 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.404000 audit[5114]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.404000 audit[5114]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff84acf210 a2=0 a3=7fff84acf1fc items=0 ppid=3003 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.535389 kernel: audit: type=1300 audit(1768397322.404:735): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff84acf210 a2=0 a3=7fff84acf1fc items=0 ppid=3003 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.535447 kernel: audit: type=1327 audit(1768397322.404:735): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.475000 audit[5114]: NETFILTER_CFG table=nat:139 family=2 entries=44 op=nft_register_rule pid=5114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.568004 kernel: audit: type=1325 audit(1768397322.475:736): table=nat:139 family=2 entries=44 op=nft_register_rule pid=5114 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.475000 audit[5114]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff84acf210 a2=0 a3=7fff84acf1fc items=0 ppid=3003 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.618289 kernel: audit: type=1300 audit(1768397322.475:736): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff84acf210 a2=0 a3=7fff84acf1fc items=0 ppid=3003 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.655366 kernel: audit: type=1327 audit(1768397322.475:736): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.711000 audit[5116]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.762554 kernel: audit: type=1325 audit(1768397322.711:737): table=filter:140 family=2 entries=14 op=nft_register_rule pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.762663 kernel: audit: type=1300 audit(1768397322.711:737): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe66e9f700 a2=0 a3=7ffe66e9f6ec items=0 ppid=3003 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.711000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe66e9f700 a2=0 a3=7ffe66e9f6ec items=0 ppid=3003 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.711000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.880349 kernel: audit: type=1327 audit(1768397322.711:737): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:42.890881 containerd[1649]: time="2026-01-14T13:28:42.890843541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:28:42.970000 audit[5116]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:43.036007 kernel: audit: type=1325 audit(1768397322.970:738): table=nat:141 family=2 entries=56 op=nft_register_chain pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:28:42.970000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe66e9f700 a2=0 a3=7ffe66e9f6ec items=0 ppid=3003 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:42.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:28:43.084552 kubelet[2886]: E0114 13:28:43.083609 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:43.115974 containerd[1649]: time="2026-01-14T13:28:43.111397116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:43.124461 containerd[1649]: time="2026-01-14T13:28:43.124022785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:28:43.124461 containerd[1649]: time="2026-01-14T13:28:43.124215763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:43.132276 kubelet[2886]: E0114 13:28:43.131008 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:28:43.132276 kubelet[2886]: E0114 13:28:43.131055 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:28:43.132276 kubelet[2886]: E0114 13:28:43.131428 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzzxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:43.132844 kubelet[2886]: E0114 13:28:43.132612 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:44.099446 kubelet[2886]: E0114 13:28:44.097401 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:45.896923 containerd[1649]: time="2026-01-14T13:28:45.894367929Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:28:46.031627 containerd[1649]: time="2026-01-14T13:28:46.031582559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:46.057505 containerd[1649]: time="2026-01-14T13:28:46.057451813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:28:46.057588 containerd[1649]: time="2026-01-14T13:28:46.057531952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:46.066665 kubelet[2886]: E0114 13:28:46.064970 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:46.066665 kubelet[2886]: E0114 13:28:46.065022 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:46.066665 kubelet[2886]: E0114 13:28:46.065391 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btjwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:46.081621 kubelet[2886]: E0114 13:28:46.079468 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:28:46.891204 containerd[1649]: time="2026-01-14T13:28:46.890525239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:28:47.047626 containerd[1649]: time="2026-01-14T13:28:47.047578057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:47.064525 containerd[1649]: time="2026-01-14T13:28:47.062033345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:28:47.068434 containerd[1649]: time="2026-01-14T13:28:47.062045427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:47.072041 kubelet[2886]: E0114 13:28:47.072008 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:28:47.083487 kubelet[2886]: E0114 13:28:47.083457 2886 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:28:47.083878 kubelet[2886]: E0114 13:28:47.083820 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:47.087057 kubelet[2886]: E0114 13:28:47.087029 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:28:52.894665 
containerd[1649]: time="2026-01-14T13:28:52.888855763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:28:52.925011 kubelet[2886]: E0114 13:28:52.920431 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:28:53.018622 containerd[1649]: time="2026-01-14T13:28:53.016070544Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:53.039944 containerd[1649]: time="2026-01-14T13:28:53.036527763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:28:53.039944 containerd[1649]: time="2026-01-14T13:28:53.036638690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:53.040068 kubelet[2886]: E0114 13:28:53.037839 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:28:53.040068 kubelet[2886]: E0114 13:28:53.037884 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:28:53.040068 kubelet[2886]: E0114 13:28:53.038012 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:53.047838 containerd[1649]: time="2026-01-14T13:28:53.046994340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:28:53.186525 containerd[1649]: time="2026-01-14T13:28:53.185363835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:28:53.200663 containerd[1649]: time="2026-01-14T13:28:53.200490904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:28:53.200663 containerd[1649]: time="2026-01-14T13:28:53.200580802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:53.206511 kubelet[2886]: E0114 13:28:53.202295 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:28:53.206511 
kubelet[2886]: E0114 13:28:53.203439 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:28:53.206511 kubelet[2886]: E0114 13:28:53.203571 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:53.206511 kubelet[2886]: E0114 13:28:53.206072 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:28:53.897677 kubelet[2886]: E0114 13:28:53.890963 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:54.910518 containerd[1649]: time="2026-01-14T13:28:54.910471740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:28:55.084576 containerd[1649]: time="2026-01-14T13:28:55.084528912Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 14 13:28:55.089672 containerd[1649]: time="2026-01-14T13:28:55.089638402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:28:55.089971 containerd[1649]: time="2026-01-14T13:28:55.089951854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:28:55.092894 kubelet[2886]: E0114 13:28:55.092846 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:55.097357 kubelet[2886]: E0114 13:28:55.096484 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:28:55.101065 kubelet[2886]: E0114 13:28:55.099545 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:28:55.104072 kubelet[2886]: E0114 13:28:55.103429 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:28:55.781894 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:36366.service - OpenSSH per-connection server daemon (10.0.0.1:36366). Jan 14 13:28:55.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:28:55.871071 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 14 13:28:55.871600 kernel: audit: type=1130 audit(1768397335.781:739): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:28:56.436373 kubelet[2886]: E0114 13:28:56.434988 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:56.672000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.679648 sshd[5159]: Accepted publickey for core from 10.0.0.1 port 36366 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:28:56.686013 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:28:56.760378 kernel: audit: type=1101 audit(1768397336.672:740): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.782306 kernel: audit: type=1103 audit(1768397336.681:741): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.681000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.816404 systemd-logind[1630]: New session 11 of user core. Jan 14 13:28:56.849501 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 13:28:56.923062 kernel: audit: type=1006 audit(1768397336.681:742): pid=5159 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 13:28:56.923449 kernel: audit: type=1300 audit(1768397336.681:742): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12eb0dc0 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:56.681000 audit[5159]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12eb0dc0 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:28:56.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:28:57.069495 kernel: audit: type=1327 audit(1768397336.681:742): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:28:57.179637 kernel: audit: type=1105 audit(1768397336.930:743): pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.930000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:56.936000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:57.258878 kernel: audit: type=1103 audit(1768397336.936:744): pid=5171 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:57.887505 kubelet[2886]: E0114 13:28:57.886438 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:28:57.892639 kubelet[2886]: E0114 13:28:57.889599 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:28:58.100448 sshd[5171]: Connection closed by 10.0.0.1 port 36366 Jan 14 13:28:58.104622 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Jan 14 13:28:58.203460 kernel: audit: type=1106 audit(1768397338.120:745): pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:58.120000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:58.197028 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:36366.service: Deactivated successfully. Jan 14 13:28:58.201934 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 13:28:58.120000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:58.215449 systemd-logind[1630]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:28:58.224437 systemd-logind[1630]: Removed session 11. Jan 14 13:28:58.269550 kernel: audit: type=1104 audit(1768397338.120:746): pid=5159 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:28:58.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:28:59.888458 kubelet[2886]: E0114 13:28:59.887568 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:29:00.894406 kubelet[2886]: E0114 13:29:00.894361 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:29:00.906034 kubelet[2886]: E0114 13:29:00.905617 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:29:02.885450 kubelet[2886]: E0114 13:29:02.885225 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:29:03.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:03.163461 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374). Jan 14 13:29:03.181558 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:03.183494 kernel: audit: type=1130 audit(1768397343.164:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:03.824053 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:03.819000 audit[5191]: USER_ACCT pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:03.848526 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:03.903275 systemd-logind[1630]: New session 12 of user core. 
Jan 14 13:29:03.916434 kernel: audit: type=1101 audit(1768397343.819:749): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:03.837000 audit[5191]: CRED_ACQ pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:03.985361 kernel: audit: type=1103 audit(1768397343.837:750): pid=5191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:03.990018 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 13:29:04.037366 kernel: audit: type=1006 audit(1768397343.839:751): pid=5191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 13:29:03.839000 audit[5191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed53fa690 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:04.157500 kernel: audit: type=1300 audit(1768397343.839:751): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed53fa690 a2=3 a3=0 items=0 ppid=1 pid=5191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:03.839000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:04.216587 kernel: audit: type=1327 audit(1768397343.839:751): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:04.323364 kernel: audit: type=1105 audit(1768397344.036:752): pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:04.036000 audit[5191]: USER_START pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:04.439713 kernel: audit: type=1103 audit(1768397344.043:753): pid=5195 uid=0 auid=500 ses=12 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:04.043000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:04.980471 sshd[5195]: Connection closed by 10.0.0.1 port 36374 Jan 14 13:29:04.981693 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:05.003000 audit[5191]: USER_END pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:05.012510 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:36374.service: Deactivated successfully. Jan 14 13:29:05.025596 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:29:05.030728 systemd-logind[1630]: Session 12 logged out. Waiting for processes to exit. Jan 14 13:29:05.034958 systemd-logind[1630]: Removed session 12. 
Jan 14 13:29:05.097374 kernel: audit: type=1106 audit(1768397345.003:754): pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:05.097501 kernel: audit: type=1104 audit(1768397345.003:755): pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:05.003000 audit[5191]: CRED_DISP pid=5191 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:05.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:05.885287 containerd[1649]: time="2026-01-14T13:29:05.882598121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:29:06.073399 containerd[1649]: time="2026-01-14T13:29:06.072648597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:06.092464 containerd[1649]: time="2026-01-14T13:29:06.089472433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:29:06.092464 containerd[1649]: time="2026-01-14T13:29:06.091063135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:06.095153 kubelet[2886]: E0114 13:29:06.093553 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:29:06.095153 kubelet[2886]: E0114 13:29:06.093602 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:29:06.095153 kubelet[2886]: E0114 13:29:06.093733 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3119e5cb9c374c7884796c10460fa4dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:06.105631 containerd[1649]: time="2026-01-14T13:29:06.102965945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:29:06.224505 containerd[1649]: 
time="2026-01-14T13:29:06.223626887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:06.239675 containerd[1649]: time="2026-01-14T13:29:06.238538811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:29:06.239675 containerd[1649]: time="2026-01-14T13:29:06.238733533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:06.239948 kubelet[2886]: E0114 13:29:06.239687 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:29:06.239948 kubelet[2886]: E0114 13:29:06.239742 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:29:06.240061 kubelet[2886]: E0114 13:29:06.240012 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:06.246936 kubelet[2886]: E0114 13:29:06.246725 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:29:07.897355 kubelet[2886]: E0114 13:29:07.897275 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:29:09.900613 kubelet[2886]: E0114 13:29:09.895787 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:29:10.053633 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:47790.service - OpenSSH per-connection server daemon (10.0.0.1:47790). Jan 14 13:29:10.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:47790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:10.074694 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:10.074780 kernel: audit: type=1130 audit(1768397350.052:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:47790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:10.324000 audit[5212]: USER_ACCT pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.329739 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 47790 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:10.340729 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:10.398329 systemd-logind[1630]: New session 13 of user core. 
Jan 14 13:29:10.419323 kernel: audit: type=1101 audit(1768397350.324:758): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.334000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.477789 kernel: audit: type=1103 audit(1768397350.334:759): pid=5212 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.489706 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 13:29:10.512306 kernel: audit: type=1006 audit(1768397350.334:760): pid=5212 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 13:29:10.512595 kernel: audit: type=1300 audit(1768397350.334:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7357b8d0 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:10.334000 audit[5212]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7357b8d0 a2=3 a3=0 items=0 ppid=1 pid=5212 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:10.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:10.567219 kernel: audit: type=1327 audit(1768397350.334:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:10.567354 kernel: audit: type=1105 audit(1768397350.515:761): pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.515000 audit[5212]: USER_START pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.528000 audit[5218]: CRED_ACQ pid=5218 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.638067 kernel: audit: type=1103 audit(1768397350.528:762): pid=5218 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.879223 sshd[5218]: Connection closed by 10.0.0.1 port 47790 Jan 14 13:29:10.881340 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:10.883000 audit[5212]: USER_END pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.892390 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:47790.service: Deactivated successfully. Jan 14 13:29:10.897684 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:29:10.903989 systemd-logind[1630]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:29:10.906290 systemd-logind[1630]: Removed session 13. 
Jan 14 13:29:10.930236 kernel: audit: type=1106 audit(1768397350.883:763): pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.884000 audit[5212]: CRED_DISP pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:10.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:47790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:10.962530 kernel: audit: type=1104 audit(1768397350.884:764): pid=5212 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:11.891610 containerd[1649]: time="2026-01-14T13:29:11.891066214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:29:11.995579 containerd[1649]: time="2026-01-14T13:29:11.993524752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:12.001646 containerd[1649]: time="2026-01-14T13:29:12.001344357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:29:12.001646 containerd[1649]: time="2026-01-14T13:29:12.001448091Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:12.003611 kubelet[2886]: E0114 13:29:12.002769 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:29:12.003611 kubelet[2886]: E0114 13:29:12.002828 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:29:12.005358 kubelet[2886]: E0114 13:29:12.004798 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.
pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:12.009260 kubelet[2886]: E0114 13:29:12.006504 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:29:12.009332 containerd[1649]: time="2026-01-14T13:29:12.006457928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:29:12.107657 containerd[1649]: time="2026-01-14T13:29:12.102310883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:12.109041 containerd[1649]: time="2026-01-14T13:29:12.108795831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:29:12.109041 containerd[1649]: time="2026-01-14T13:29:12.108980123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:12.113422 kubelet[2886]: E0114 13:29:12.110335 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:29:12.113422 kubelet[2886]: E0114 13:29:12.110735 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:29:12.113422 kubelet[2886]: E0114 13:29:12.111482 2886 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btjwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:12.115819 kubelet[2886]: E0114 13:29:12.115357 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:29:12.877000 containerd[1649]: time="2026-01-14T13:29:12.875765431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:29:12.999222 containerd[1649]: time="2026-01-14T13:29:12.997401529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
13:29:13.012993 containerd[1649]: time="2026-01-14T13:29:13.012834628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:29:13.015761 containerd[1649]: time="2026-01-14T13:29:13.013362411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:13.018587 kubelet[2886]: E0114 13:29:13.017684 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:29:13.018587 kubelet[2886]: E0114 13:29:13.017739 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:29:13.021435 kubelet[2886]: E0114 13:29:13.019378 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzzxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:13.021435 kubelet[2886]: E0114 13:29:13.021057 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:29:15.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:15.919757 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:54654.service - OpenSSH per-connection server daemon (10.0.0.1:54654). Jan 14 13:29:15.928742 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:15.928822 kernel: audit: type=1130 audit(1768397355.918:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:16.084000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.087723 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 54654 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:16.091707 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:16.107045 systemd-logind[1630]: New session 14 of user core. 
Jan 14 13:29:16.141651 kernel: audit: type=1101 audit(1768397356.084:767): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.088000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.180846 kernel: audit: type=1103 audit(1768397356.088:768): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.181071 kernel: audit: type=1006 audit(1768397356.088:769): pid=5237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 13:29:16.179306 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 13:29:16.088000 audit[5237]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce4d53820 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:16.245258 kernel: audit: type=1300 audit(1768397356.088:769): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce4d53820 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:16.245390 kernel: audit: type=1327 audit(1768397356.088:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:16.088000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:16.185000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.301458 kernel: audit: type=1105 audit(1768397356.185:770): pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.301985 kernel: audit: type=1103 audit(1768397356.190:771): pid=5243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 
13:29:16.190000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.546998 sshd[5243]: Connection closed by 10.0.0.1 port 54654 Jan 14 13:29:16.547755 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:16.550000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.561995 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:54654.service: Deactivated successfully. Jan 14 13:29:16.563628 systemd-logind[1630]: Session 14 logged out. Waiting for processes to exit. Jan 14 13:29:16.573706 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:29:16.587323 systemd-logind[1630]: Removed session 14. 
Jan 14 13:29:16.599553 kernel: audit: type=1106 audit(1768397356.550:772): pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.551000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.636450 kernel: audit: type=1104 audit(1768397356.551:773): pid=5237 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:16.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:54654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:20.881508 kubelet[2886]: E0114 13:29:20.880493 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:29:21.578277 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:21.578411 kernel: audit: type=1130 audit(1768397361.568:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:54664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:21.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:54664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:21.569047 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:54664.service - OpenSSH per-connection server daemon (10.0.0.1:54664). 
Jan 14 13:29:21.723000 audit[5259]: USER_ACCT pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.731537 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 54664 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:21.732029 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:21.747807 systemd-logind[1630]: New session 15 of user core. Jan 14 13:29:21.763709 kernel: audit: type=1101 audit(1768397361.723:776): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.763780 kernel: audit: type=1103 audit(1768397361.725:777): pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.725000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.812503 kernel: audit: type=1006 audit(1768397361.725:778): pid=5259 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 13:29:21.812643 kernel: audit: type=1300 audit(1768397361.725:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4d6b9af0 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:21.725000 audit[5259]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4d6b9af0 a2=3 a3=0 items=0 ppid=1 pid=5259 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:21.845410 kernel: audit: type=1327 audit(1768397361.725:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:21.725000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:21.862355 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 13:29:21.867000 audit[5259]: USER_START pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.874293 kubelet[2886]: E0114 13:29:21.872799 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:29:21.888813 containerd[1649]: time="2026-01-14T13:29:21.888747387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:29:21.915537 kernel: audit: type=1105 audit(1768397361.867:779): pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.872000 audit[5263]: CRED_ACQ pid=5263 uid=0 
auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:21.948328 kernel: audit: type=1103 audit(1768397361.872:780): pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.025987 containerd[1649]: time="2026-01-14T13:29:22.025229276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:22.028420 containerd[1649]: time="2026-01-14T13:29:22.028313486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:29:22.028420 containerd[1649]: time="2026-01-14T13:29:22.028392654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:22.028795 kubelet[2886]: E0114 13:29:22.028755 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:29:22.030201 kubelet[2886]: E0114 13:29:22.029525 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:29:22.030201 kubelet[2886]: E0114 13:29:22.029678 2886 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:22.030879 kubelet[2886]: E0114 13:29:22.030754 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:29:22.083356 sshd[5263]: Connection closed by 10.0.0.1 port 54664 Jan 14 13:29:22.081664 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:22.084000 audit[5259]: USER_END pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.085000 audit[5259]: CRED_DISP pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.163435 kernel: audit: type=1106 audit(1768397362.084:781): pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.163580 kernel: audit: type=1104 audit(1768397362.085:782): pid=5259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.171411 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:54664.service: Deactivated successfully. Jan 14 13:29:22.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:54664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:22.176764 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:29:22.182049 systemd-logind[1630]: Session 15 logged out. Waiting for processes to exit. Jan 14 13:29:22.186462 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:54672.service - OpenSSH per-connection server daemon (10.0.0.1:54672). 
Jan 14 13:29:22.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:54672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:22.190597 systemd-logind[1630]: Removed session 15. Jan 14 13:29:22.302000 audit[5277]: USER_ACCT pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.304004 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 54672 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:22.306000 audit[5277]: CRED_ACQ pid=5277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.307000 audit[5277]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc0e441d0 a2=3 a3=0 items=0 ppid=1 pid=5277 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:22.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:22.310884 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:22.327678 systemd-logind[1630]: New session 16 of user core. Jan 14 13:29:22.337842 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 13:29:22.345000 audit[5277]: USER_START pid=5277 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.350000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.577633 sshd[5281]: Connection closed by 10.0.0.1 port 54672 Jan 14 13:29:22.579674 sshd-session[5277]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:22.584000 audit[5277]: USER_END pid=5277 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.584000 audit[5277]: CRED_DISP pid=5277 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:54672 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:22.598744 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:54672.service: Deactivated successfully. Jan 14 13:29:22.607560 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:29:22.610458 systemd-logind[1630]: Session 16 logged out. Waiting for processes to exit. 
Jan 14 13:29:22.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:54684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:22.619538 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:54684.service - OpenSSH per-connection server daemon (10.0.0.1:54684). Jan 14 13:29:22.622061 systemd-logind[1630]: Removed session 16. Jan 14 13:29:22.744000 audit[5293]: USER_ACCT pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.746328 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 54684 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:22.747000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.747000 audit[5293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5aeff580 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:22.747000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:22.750832 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:22.767513 systemd-logind[1630]: New session 17 of user core. Jan 14 13:29:22.778847 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 13:29:22.785000 audit[5293]: USER_START pid=5293 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.789000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:22.878325 kubelet[2886]: E0114 13:29:22.878005 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:29:22.886580 containerd[1649]: time="2026-01-14T13:29:22.885875088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:29:22.985050 containerd[1649]: time="2026-01-14T13:29:22.984889206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:22.989276 containerd[1649]: time="2026-01-14T13:29:22.989072188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:29:22.989438 containerd[1649]: time="2026-01-14T13:29:22.989368380Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:22.989710 kubelet[2886]: E0114 13:29:22.989683 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:29:22.989802 kubelet[2886]: E0114 13:29:22.989785 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:29:22.990347 kubelet[2886]: E0114 13:29:22.990308 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:22.997476 sshd[5297]: Connection closed by 10.0.0.1 port 54684 Jan 14 13:29:22.997434 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:23.001880 containerd[1649]: time="2026-01-14T13:29:23.001851669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:29:23.006000 audit[5293]: USER_END pid=5293 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:23.007000 audit[5293]: CRED_DISP pid=5293 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:23.018526 
systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:54684.service: Deactivated successfully. Jan 14 13:29:23.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:54684 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:23.025580 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:29:23.029516 systemd-logind[1630]: Session 17 logged out. Waiting for processes to exit. Jan 14 13:29:23.034423 systemd-logind[1630]: Removed session 17. Jan 14 13:29:23.098517 containerd[1649]: time="2026-01-14T13:29:23.098469358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:29:23.104233 containerd[1649]: time="2026-01-14T13:29:23.102761105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:29:23.104233 containerd[1649]: time="2026-01-14T13:29:23.102864728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:29:23.104352 kubelet[2886]: E0114 13:29:23.103382 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:29:23.104352 kubelet[2886]: E0114 13:29:23.103433 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:29:23.104352 kubelet[2886]: E0114 13:29:23.103572 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:29:23.106501 kubelet[2886]: E0114 13:29:23.105310 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:29:23.880521 kubelet[2886]: E0114 13:29:23.878451 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:29:25.879545 kubelet[2886]: E0114 13:29:25.879042 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:29:28.024721 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:51580.service - OpenSSH per-connection server daemon (10.0.0.1:51580). Jan 14 13:29:28.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:51580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:28.033388 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 13:29:28.033436 kernel: audit: type=1130 audit(1768397368.023:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:51580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:28.158000 audit[5335]: USER_ACCT pid=5335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.160720 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 51580 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:28.164817 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:28.178606 systemd-logind[1630]: New session 18 of user core. 
Jan 14 13:29:28.159000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.229663 kernel: audit: type=1101 audit(1768397368.158:803): pid=5335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.229752 kernel: audit: type=1103 audit(1768397368.159:804): pid=5335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.251815 kernel: audit: type=1006 audit(1768397368.159:805): pid=5335 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 13:29:28.251876 kernel: audit: type=1300 audit(1768397368.159:805): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd94d75860 a2=3 a3=0 items=0 ppid=1 pid=5335 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:28.159000 audit[5335]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd94d75860 a2=3 a3=0 items=0 ppid=1 pid=5335 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:28.254278 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 13:29:28.292531 kernel: audit: type=1327 audit(1768397368.159:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:28.159000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:28.306974 kernel: audit: type=1105 audit(1768397368.261:806): pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.261000 audit[5335]: USER_START pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.266000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.383493 kernel: audit: type=1103 audit(1768397368.266:807): pid=5340 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.494831 sshd[5340]: Connection closed by 10.0.0.1 port 51580 Jan 14 13:29:28.498521 sshd-session[5335]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:28.502000 audit[5335]: USER_END pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.510067 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:51580.service: Deactivated successfully. Jan 14 13:29:28.519511 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:29:28.525049 systemd-logind[1630]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:29:28.530013 systemd-logind[1630]: Removed session 18. Jan 14 13:29:28.502000 audit[5335]: CRED_DISP pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.568998 kernel: audit: type=1106 audit(1768397368.502:808): pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.569245 kernel: audit: type=1104 audit(1768397368.502:809): pid=5335 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:28.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:51580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:32.883805 kubelet[2886]: E0114 13:29:32.883340 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:29:33.533286 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:51594.service - OpenSSH per-connection server daemon (10.0.0.1:51594). Jan 14 13:29:33.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:51594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:33.543455 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:33.543539 kernel: audit: type=1130 audit(1768397373.532:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:51594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:33.665215 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 51594 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:33.663000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.668758 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:33.686590 systemd-logind[1630]: New session 19 of user core. Jan 14 13:29:33.665000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.737010 kernel: audit: type=1101 audit(1768397373.663:812): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.737363 kernel: audit: type=1103 audit(1768397373.665:813): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.737397 kernel: audit: type=1006 audit(1768397373.665:814): pid=5354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 13:29:33.665000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6d9f1510 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:33.797812 kernel: audit: type=1300 audit(1768397373.665:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6d9f1510 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:33.798015 kernel: audit: type=1327 audit(1768397373.665:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:33.665000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:33.816685 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 13:29:33.824000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.829000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.885638 kubelet[2886]: E0114 13:29:33.885445 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" 
podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:29:33.901320 kernel: audit: type=1105 audit(1768397373.824:815): pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:33.901492 kernel: audit: type=1103 audit(1768397373.829:816): pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:34.042352 sshd[5358]: Connection closed by 10.0.0.1 port 51594 Jan 14 13:29:34.043722 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:34.046000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:34.055069 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:51594.service: Deactivated successfully. Jan 14 13:29:34.059561 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:29:34.063290 systemd-logind[1630]: Session 19 logged out. Waiting for processes to exit. Jan 14 13:29:34.066468 systemd-logind[1630]: Removed session 19. 
Jan 14 13:29:34.098614 kernel: audit: type=1106 audit(1768397374.046:817): pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:34.098704 kernel: audit: type=1104 audit(1768397374.047:818): pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:34.047000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:34.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:51594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:34.875420 kubelet[2886]: E0114 13:29:34.875043 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:29:36.877787 kubelet[2886]: E0114 13:29:36.877410 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:29:37.873823 kubelet[2886]: E0114 13:29:37.873252 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:29:37.877880 kubelet[2886]: E0114 13:29:37.877844 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:29:39.062736 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). Jan 14 13:29:39.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:53130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:39.071869 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:39.072010 kernel: audit: type=1130 audit(1768397379.061:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:53130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:39.202000 audit[5379]: USER_ACCT pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.204844 sshd[5379]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:39.208824 sshd-session[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:39.221800 systemd-logind[1630]: New session 20 of user core. Jan 14 13:29:39.205000 audit[5379]: CRED_ACQ pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.286431 kernel: audit: type=1101 audit(1768397379.202:821): pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.286547 kernel: audit: type=1103 audit(1768397379.205:822): pid=5379 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.286572 kernel: audit: type=1006 audit(1768397379.205:823): pid=5379 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 13:29:39.205000 audit[5379]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff25bf38c0 a2=3 a3=0 items=0 ppid=1 pid=5379 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:39.352029 kernel: audit: type=1300 audit(1768397379.205:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff25bf38c0 a2=3 a3=0 items=0 ppid=1 pid=5379 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:39.352342 kernel: audit: type=1327 audit(1768397379.205:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:39.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:39.369549 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 13:29:39.378000 audit[5379]: USER_START pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.426373 kernel: audit: type=1105 audit(1768397379.378:824): pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.426498 kernel: audit: type=1103 audit(1768397379.382:825): pid=5383 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.382000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.646843 sshd[5383]: Connection closed by 10.0.0.1 port 53130 Jan 14 13:29:39.647387 sshd-session[5379]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:39.649000 audit[5379]: USER_END pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.654632 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:53130.service: Deactivated successfully. Jan 14 13:29:39.658796 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:29:39.661787 systemd-logind[1630]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:29:39.667748 systemd-logind[1630]: Removed session 20. 
Jan 14 13:29:39.697369 kernel: audit: type=1106 audit(1768397379.649:826): pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.697469 kernel: audit: type=1104 audit(1768397379.650:827): pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.650000 audit[5379]: CRED_DISP pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:39.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:53130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:43.877713 kubelet[2886]: E0114 13:29:43.877414 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:29:44.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:42796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:44.664469 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:42796.service - OpenSSH per-connection server daemon (10.0.0.1:42796). Jan 14 13:29:44.671398 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:44.671835 kernel: audit: type=1130 audit(1768397384.663:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:42796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:44.800000 audit[5399]: USER_ACCT pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.811677 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 42796 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:44.815730 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:44.829401 systemd-logind[1630]: New session 21 of user core. Jan 14 13:29:44.810000 audit[5399]: CRED_ACQ pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.868383 kernel: audit: type=1101 audit(1768397384.800:830): pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.868436 kernel: audit: type=1103 audit(1768397384.810:831): pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.869429 kernel: audit: type=1006 audit(1768397384.810:832): pid=5399 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 13:29:44.810000 audit[5399]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe76869320 a2=3 a3=0 items=0 ppid=1 pid=5399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:44.917597 kernel: audit: type=1300 audit(1768397384.810:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe76869320 a2=3 a3=0 items=0 ppid=1 pid=5399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:44.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:44.918495 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 13:29:44.931370 kernel: audit: type=1327 audit(1768397384.810:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:44.931409 kernel: audit: type=1105 audit(1768397384.925:833): pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.925000 audit[5399]: USER_START pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.968283 kernel: audit: type=1103 audit(1768397384.929:834): pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:44.929000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:45.106012 sshd[5403]: Connection closed by 10.0.0.1 port 42796 Jan 14 13:29:45.106522 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:45.108000 audit[5399]: USER_END pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:45.113518 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:42796.service: Deactivated successfully. Jan 14 13:29:45.117064 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:29:45.121564 systemd-logind[1630]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:29:45.125651 systemd-logind[1630]: Removed session 21. 
Jan 14 13:29:45.109000 audit[5399]: CRED_DISP pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:45.146391 kernel: audit: type=1106 audit(1768397385.108:835): pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:45.146436 kernel: audit: type=1104 audit(1768397385.109:836): pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:45.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:42796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:46.875216 kubelet[2886]: E0114 13:29:46.874827 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:29:47.878018 kubelet[2886]: E0114 13:29:47.877268 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:29:47.879564 kubelet[2886]: E0114 13:29:47.879423 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:29:50.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:42806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:50.125586 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:42806.service - OpenSSH per-connection server daemon (10.0.0.1:42806). Jan 14 13:29:50.132594 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:50.132629 kernel: audit: type=1130 audit(1768397390.124:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:42806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:50.305000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.306994 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 42806 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:29:50.310647 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:29:50.323978 systemd-logind[1630]: New session 22 of user core. 
Jan 14 13:29:50.341323 kernel: audit: type=1101 audit(1768397390.305:839): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.341406 kernel: audit: type=1103 audit(1768397390.307:840): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.307000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.374794 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 13:29:50.389487 kernel: audit: type=1006 audit(1768397390.307:841): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 13:29:50.389543 kernel: audit: type=1300 audit(1768397390.307:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcdae1480 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:50.307000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcdae1480 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:29:50.422422 kernel: audit: type=1327 audit(1768397390.307:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:50.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:29:50.382000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.470299 kernel: audit: type=1105 audit(1768397390.382:842): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.386000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.497052 kernel: audit: type=1103 audit(1768397390.386:843): pid=5420 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.610465 sshd[5420]: Connection closed by 10.0.0.1 port 42806 Jan 14 13:29:50.611008 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 14 13:29:50.613000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.618371 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:42806.service: Deactivated successfully. Jan 14 13:29:50.622037 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:29:50.623791 systemd-logind[1630]: Session 22 logged out. Waiting for processes to exit. Jan 14 13:29:50.627567 systemd-logind[1630]: Removed session 22. 
Jan 14 13:29:50.613000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.689638 kernel: audit: type=1106 audit(1768397390.613:844): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.689710 kernel: audit: type=1104 audit(1768397390.613:845): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:29:50.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:42806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:51.873834 kubelet[2886]: E0114 13:29:51.873388 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:29:52.874848 kubelet[2886]: E0114 13:29:52.874355 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:29:55.630906 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:49336.service - OpenSSH per-connection server daemon (10.0.0.1:49336). Jan 14 13:29:55.638254 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:29:55.638335 kernel: audit: type=1130 audit(1768397395.630:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:29:55.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:29:56.871706 kubelet[2886]: E0114 13:29:56.871626 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:29:57.874442 containerd[1649]: time="2026-01-14T13:29:57.874291712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:29:59.876525 kubelet[2886]: E0114 13:29:59.876471 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:30:01.873639 kubelet[2886]: E0114 13:30:01.873515 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:02.871893 kubelet[2886]: E0114 13:30:02.871821 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:04.404598 containerd[1649]: time="2026-01-14T13:30:04.404478562Z" level=error msg="ExecSync for 
\"9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jan 14 13:30:04.405602 kubelet[2886]: E0114 13:30:04.404880 2886 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9c314e8b326d16679d631da62268a8745773edaee57ab9f8f4cbe6f83a3ab7a6" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 14 13:30:11.425439 kubelet[2886]: E0114 13:30:11.424509 2886 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 14 13:30:11.433599 kubelet[2886]: E0114 13:30:11.432816 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:11.462473 kubelet[2886]: E0114 13:30:11.459566 2886 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{goldmane-666569f655-h2gf2.188a9bfc18ca4b35 calico-system 1451 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-h2gf2,UID:97139d64-ebd5-495e-81ad-3f4aa4c54bfd,APIVersion:v1,ResourceVersion:834,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 13:28:30 +0000 UTC,LastTimestamp:2026-01-14 13:29:52.874027946 +0000 UTC m=+139.311649354,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 13:30:11.468000 audit: BPF prog-id=259 op=LOAD Jan 14 13:30:11.486667 kernel: audit: type=1334 audit(1768397411.468:848): prog-id=259 op=LOAD Jan 14 13:30:11.467602 systemd[1]: cri-containerd-75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d.scope: Deactivated successfully. Jan 14 13:30:11.487337 kubelet[2886]: E0114 13:30:11.485243 2886 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Jan 14 13:30:11.468395 systemd[1]: cri-containerd-75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d.scope: Consumed 11.342s CPU time, 74.8M memory peak, 9.4M read from disk. Jan 14 13:30:11.468000 audit: BPF prog-id=86 op=UNLOAD Jan 14 13:30:11.516833 kernel: audit: type=1334 audit(1768397411.468:849): prog-id=86 op=UNLOAD Jan 14 13:30:11.516921 containerd[1649]: time="2026-01-14T13:30:11.505292990Z" level=info msg="received container exit event container_id:\"75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d\" id:\"75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d\" pid:2710 exit_status:1 exited_at:{seconds:1768397411 nanos:500667649}" Jan 14 13:30:11.484000 audit: BPF prog-id=101 op=UNLOAD Jan 14 13:30:11.538322 kernel: audit: type=1334 audit(1768397411.484:850): prog-id=101 op=UNLOAD Jan 14 13:30:11.484000 audit: BPF prog-id=105 op=UNLOAD Jan 14 13:30:11.552386 kernel: audit: type=1334 audit(1768397411.484:851): prog-id=105 op=UNLOAD Jan 14 13:30:11.568397 systemd[1]: cri-containerd-0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59.scope: Deactivated successfully. 
Jan 14 13:30:11.570000 audit: BPF prog-id=260 op=LOAD Jan 14 13:30:11.568958 systemd[1]: cri-containerd-0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59.scope: Consumed 4.610s CPU time, 27.1M memory peak, 7.3M read from disk. Jan 14 13:30:11.571738 systemd[1]: cri-containerd-5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64.scope: Deactivated successfully. Jan 14 13:30:11.572942 systemd[1]: cri-containerd-5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64.scope: Consumed 16.703s CPU time, 86.8M memory peak, 14.7M read from disk. Jan 14 13:30:11.590705 kernel: audit: type=1334 audit(1768397411.570:852): prog-id=260 op=LOAD Jan 14 13:30:11.570000 audit: BPF prog-id=95 op=UNLOAD Jan 14 13:30:11.606270 kernel: audit: type=1334 audit(1768397411.570:853): prog-id=95 op=UNLOAD Jan 14 13:30:11.571000 audit: BPF prog-id=110 op=UNLOAD Jan 14 13:30:11.609556 containerd[1649]: time="2026-01-14T13:30:11.609326625Z" level=info msg="received container exit event container_id:\"5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64\" id:\"5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64\" pid:3222 exit_status:1 exited_at:{seconds:1768397411 nanos:596344727}" Jan 14 13:30:11.609670 containerd[1649]: time="2026-01-14T13:30:11.609649032Z" level=info msg="received container exit event container_id:\"0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59\" id:\"0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59\" pid:2730 exit_status:1 exited_at:{seconds:1768397411 nanos:593846832}" Jan 14 13:30:11.618544 kernel: audit: type=1334 audit(1768397411.571:854): prog-id=110 op=UNLOAD Jan 14 13:30:11.571000 audit: BPF prog-id=115 op=UNLOAD Jan 14 13:30:11.634837 kernel: audit: type=1334 audit(1768397411.571:855): prog-id=115 op=UNLOAD Jan 14 13:30:11.575000 audit: BPF prog-id=149 op=UNLOAD Jan 14 13:30:11.642406 containerd[1649]: time="2026-01-14T13:30:11.641852918Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 14 13:30:11.646504 kernel: audit: type=1334 audit(1768397411.575:856): prog-id=149 op=UNLOAD Jan 14 13:30:11.646565 kernel: audit: type=1334 audit(1768397411.575:857): prog-id=153 op=UNLOAD Jan 14 13:30:11.575000 audit: BPF prog-id=153 op=UNLOAD Jan 14 13:30:11.664558 containerd[1649]: time="2026-01-14T13:30:11.664364745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:11.664558 containerd[1649]: time="2026-01-14T13:30:11.664496931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:30:11.667350 kubelet[2886]: E0114 13:30:11.666306 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:30:11.667350 kubelet[2886]: E0114 13:30:11.666426 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:30:11.667350 kubelet[2886]: E0114 13:30:11.666586 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3119e5cb9c374c7884796c10460fa4dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:11.693610 containerd[1649]: time="2026-01-14T13:30:11.692914293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:30:11.779000 audit[5461]: USER_ACCT pid=5461 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:11.782000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:11.783000 audit[5461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7bbd26e0 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:11.783000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:11.782653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64-rootfs.mount: Deactivated successfully. 
Jan 14 13:30:11.794639 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 49336 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:11.785968 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:11.795365 containerd[1649]: time="2026-01-14T13:30:11.792502690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:11.804558 containerd[1649]: time="2026-01-14T13:30:11.802296722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:30:11.804558 containerd[1649]: time="2026-01-14T13:30:11.802402959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:11.804558 containerd[1649]: time="2026-01-14T13:30:11.803534117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:30:11.804704 kubelet[2886]: E0114 13:30:11.802609 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:30:11.804704 kubelet[2886]: E0114 13:30:11.802664 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:30:11.804704 kubelet[2886]: E0114 13:30:11.802908 2886 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjvdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c96748b8f-wwf76_calico-system(1356d1d1-69e1-470e-955d-5a3a9ab090a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:11.807597 kubelet[2886]: E0114 13:30:11.805510 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:30:11.809877 systemd-logind[1630]: New session 23 of user core. Jan 14 13:30:11.817419 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 13:30:11.828000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:11.833000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:11.868563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59-rootfs.mount: Deactivated successfully. Jan 14 13:30:11.885887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d-rootfs.mount: Deactivated successfully. 
Jan 14 13:30:11.900461 containerd[1649]: time="2026-01-14T13:30:11.900420853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:11.916527 containerd[1649]: time="2026-01-14T13:30:11.910944365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:30:11.916527 containerd[1649]: time="2026-01-14T13:30:11.911269528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:11.916634 kubelet[2886]: E0114 13:30:11.911364 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:30:11.916634 kubelet[2886]: E0114 13:30:11.911401 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:30:11.916634 kubelet[2886]: E0114 13:30:11.911575 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbd5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-kb2gl_calico-apiserver(fc822dd2-4a0b-4df8-969d-8ce5598b7069): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:11.916634 kubelet[2886]: E0114 13:30:11.912921 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:30:11.928956 containerd[1649]: time="2026-01-14T13:30:11.928531900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:30:12.024937 containerd[1649]: time="2026-01-14T13:30:12.024515496Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:12.027669 kubelet[2886]: I0114 13:30:12.027429 2886 scope.go:117] "RemoveContainer" containerID="0f98620dc90d039a2318c473d80902729626f380cb13b3057ecf1adc7417ad59" Jan 14 13:30:12.027669 kubelet[2886]: E0114 13:30:12.027497 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:12.039685 containerd[1649]: time="2026-01-14T13:30:12.039322319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:30:12.039685 containerd[1649]: time="2026-01-14T13:30:12.039375839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:12.040849 kubelet[2886]: E0114 13:30:12.040621 2886 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:30:12.040849 kubelet[2886]: E0114 13:30:12.040654 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:30:12.040849 kubelet[2886]: E0114 13:30:12.040788 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btjwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68b6f8f57b-4vsgx_calico-apiserver(43c81015-17c1-4886-ba54-03a8237f3050): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:12.041766 kubelet[2886]: I0114 13:30:12.041747 2886 scope.go:117] "RemoveContainer" containerID="75d22bcc392982034d1838254cb8f6904a8ac001882ed5fdb517d98578c2e64d" Jan 14 13:30:12.041919 containerd[1649]: time="2026-01-14T13:30:12.041898850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:30:12.050313 kubelet[2886]: E0114 13:30:12.048680 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:12.050313 kubelet[2886]: E0114 13:30:12.042343 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:30:12.050428 containerd[1649]: time="2026-01-14T13:30:12.049907214Z" level=info msg="CreateContainer within sandbox \"7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 13:30:12.078695 containerd[1649]: time="2026-01-14T13:30:12.078597176Z" level=info msg="CreateContainer within sandbox \"6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 13:30:12.090700 kubelet[2886]: E0114 13:30:12.090678 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:12.091316 kubelet[2886]: I0114 13:30:12.091298 2886 scope.go:117] "RemoveContainer" containerID="5d6567ddb404082520b9ab00e8166185a5fa2fb82e440386067fd03689ec9d64" Jan 14 13:30:12.117321 containerd[1649]: time="2026-01-14T13:30:12.116995497Z" level=info msg="CreateContainer within sandbox \"b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 13:30:12.147886 containerd[1649]: time="2026-01-14T13:30:12.145638381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:12.156721 containerd[1649]: time="2026-01-14T13:30:12.156687432Z" level=info msg="Container 69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:30:12.161412 containerd[1649]: time="2026-01-14T13:30:12.156792937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:30:12.161556 containerd[1649]: time="2026-01-14T13:30:12.156798267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:12.162630 containerd[1649]: time="2026-01-14T13:30:12.162589673Z" level=info msg="Container cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:30:12.165576 kubelet[2886]: E0114 13:30:12.165442 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:30:12.165664 kubelet[2886]: E0114 13:30:12.165578 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:30:12.166490 kubelet[2886]: E0114 13:30:12.165760 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzzxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h2gf2_calico-system(97139d64-ebd5-495e-81ad-3f4aa4c54bfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:12.172434 kubelet[2886]: E0114 13:30:12.171235 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:30:12.173792 containerd[1649]: time="2026-01-14T13:30:12.173755331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:30:12.177697 containerd[1649]: time="2026-01-14T13:30:12.174331856Z" level=info msg="Container a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa: CDI devices from CRI Config.CDIDevices: []" Jan 
14 13:30:12.209295 containerd[1649]: time="2026-01-14T13:30:12.208682440Z" level=info msg="CreateContainer within sandbox \"7e892724b5e085cbd1b0997a2863e7375b5e64e535e12d040408799d9347a5a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2\"" Jan 14 13:30:12.216402 containerd[1649]: time="2026-01-14T13:30:12.210999960Z" level=info msg="StartContainer for \"69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2\"" Jan 14 13:30:12.219950 containerd[1649]: time="2026-01-14T13:30:12.219775212Z" level=info msg="connecting to shim 69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2" address="unix:///run/containerd/s/bc349eb8100ae0d6ca80468bab5c08c41ea116dd50829f65a7742606d3609641" protocol=ttrpc version=3 Jan 14 13:30:12.238747 containerd[1649]: time="2026-01-14T13:30:12.238682903Z" level=info msg="CreateContainer within sandbox \"b01bf6008318fa1d35585ac38686faec158dc1c0c71831d5e592351df85d61bd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa\"" Jan 14 13:30:12.240749 containerd[1649]: time="2026-01-14T13:30:12.240722103Z" level=info msg="StartContainer for \"a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa\"" Jan 14 13:30:12.248418 containerd[1649]: time="2026-01-14T13:30:12.246987108Z" level=info msg="CreateContainer within sandbox \"6cfdd7d5bb36568756129189144acda094e395db618ecd4fea8396cf1d89ab2d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c\"" Jan 14 13:30:12.249529 containerd[1649]: time="2026-01-14T13:30:12.249507683Z" level=info msg="connecting to shim a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa" address="unix:///run/containerd/s/f09edfdb7b909135d842624bb9204e9f056e36de92a11856c862ec5cd7bec266" protocol=ttrpc 
version=3 Jan 14 13:30:12.253385 containerd[1649]: time="2026-01-14T13:30:12.252704275Z" level=info msg="StartContainer for \"cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c\"" Jan 14 13:30:12.269561 containerd[1649]: time="2026-01-14T13:30:12.269410195Z" level=info msg="connecting to shim cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c" address="unix:///run/containerd/s/5d77bd00927006470bc15dd4fa46dcc4019df641c2e72ded37760c320405aa97" protocol=ttrpc version=3 Jan 14 13:30:12.273475 sshd[5534]: Connection closed by 10.0.0.1 port 49336 Jan 14 13:30:12.275468 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:12.284515 containerd[1649]: time="2026-01-14T13:30:12.284355212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:12.283000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:12.284000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:12.294720 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:49336.service: Deactivated successfully. Jan 14 13:30:12.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:49336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:12.300707 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 14 13:30:12.305424 containerd[1649]: time="2026-01-14T13:30:12.305255068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:30:12.305424 containerd[1649]: time="2026-01-14T13:30:12.305339154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:12.305895 systemd-logind[1630]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:30:12.307447 kubelet[2886]: E0114 13:30:12.306856 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:30:12.307447 kubelet[2886]: E0114 13:30:12.306902 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:30:12.307447 kubelet[2886]: E0114 13:30:12.307303 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j8j4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76d688f66-n8bg2_calico-system(2d3c1365-6a1f-45b8-8652-2b261d46979e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:12.309754 kubelet[2886]: E0114 13:30:12.308993 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:30:12.321399 systemd[1]: Started cri-containerd-a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa.scope - libcontainer container a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa. Jan 14 13:30:12.323849 systemd-logind[1630]: Removed session 23. Jan 14 13:30:12.350834 systemd[1]: Started cri-containerd-cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c.scope - libcontainer container cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c. Jan 14 13:30:12.360518 systemd[1]: Started cri-containerd-69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2.scope - libcontainer container 69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2. 
Jan 14 13:30:12.400000 audit: BPF prog-id=261 op=LOAD Jan 14 13:30:12.403000 audit: BPF prog-id=262 op=LOAD Jan 14 13:30:12.403000 audit[5549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.404000 audit: BPF prog-id=262 op=UNLOAD Jan 14 13:30:12.404000 audit[5549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.404000 audit: BPF prog-id=263 op=LOAD Jan 14 13:30:12.404000 audit[5549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.404000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.405000 audit: BPF prog-id=264 op=LOAD Jan 14 13:30:12.405000 audit[5549]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.405000 audit: BPF prog-id=264 op=UNLOAD Jan 14 13:30:12.405000 audit[5549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.405000 audit: BPF prog-id=263 op=UNLOAD Jan 14 13:30:12.405000 audit[5549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:30:12.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.405000 audit: BPF prog-id=265 op=LOAD Jan 14 13:30:12.405000 audit: BPF prog-id=266 op=LOAD Jan 14 13:30:12.405000 audit[5549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3038 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136333431323239643038626433366561343865353433613533316430 Jan 14 13:30:12.410000 audit: BPF prog-id=267 op=LOAD Jan 14 13:30:12.410000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.412000 audit: BPF prog-id=267 op=UNLOAD Jan 14 13:30:12.412000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.414000 audit: BPF prog-id=268 op=LOAD Jan 14 13:30:12.414000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.415000 audit: BPF prog-id=269 op=LOAD Jan 14 13:30:12.415000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.415000 audit: BPF prog-id=269 op=UNLOAD Jan 14 13:30:12.415000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.415000 audit: BPF prog-id=268 op=UNLOAD Jan 14 13:30:12.415000 audit[5558]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.415000 audit: BPF prog-id=270 op=LOAD Jan 14 13:30:12.415000 audit[5558]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2569 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366343930653538313439623732633766373136396265623136643234 Jan 14 13:30:12.421000 audit: BPF prog-id=271 op=LOAD Jan 14 13:30:12.424000 audit: BPF prog-id=272 op=LOAD Jan 14 13:30:12.424000 audit[5548]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.425000 audit: BPF prog-id=272 op=UNLOAD Jan 14 13:30:12.425000 audit[5548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.425000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.427000 audit: BPF prog-id=273 op=LOAD Jan 14 13:30:12.427000 audit[5548]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.427000 audit: BPF prog-id=274 op=LOAD Jan 14 13:30:12.427000 audit[5548]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.427000 audit: BPF prog-id=274 op=UNLOAD Jan 14 13:30:12.427000 audit[5548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.428000 audit: BPF prog-id=273 op=UNLOAD Jan 14 13:30:12.428000 audit[5548]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.430000 audit: BPF prog-id=275 op=LOAD Jan 
14 13:30:12.430000 audit[5548]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2573 pid=5548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:12.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639353433646631333361303531336461336162303866306530313334 Jan 14 13:30:12.520740 containerd[1649]: time="2026-01-14T13:30:12.517931369Z" level=info msg="StartContainer for \"a6341229d08bd36ea48e543a531d0b2091be02f779636e16382cbaf042ae6faa\" returns successfully" Jan 14 13:30:12.569472 containerd[1649]: time="2026-01-14T13:30:12.569290798Z" level=info msg="StartContainer for \"cf490e58149b72c7f7169beb16d2419d7618283c3dc34884fac707ed0e13837c\" returns successfully" Jan 14 13:30:12.620995 containerd[1649]: time="2026-01-14T13:30:12.620962901Z" level=info msg="StartContainer for \"69543df133a0513da3ab08f0e0134459089378d5a69c8135abd615919a4666d2\" returns successfully" Jan 14 13:30:12.783812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641661847.mount: Deactivated successfully. Jan 14 13:30:12.784406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667230290.mount: Deactivated successfully. 
Jan 14 13:30:13.120330 kubelet[2886]: E0114 13:30:13.118423 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:13.131651 kubelet[2886]: E0114 13:30:13.130998 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:13.887804 containerd[1649]: time="2026-01-14T13:30:13.887652529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:30:13.968301 containerd[1649]: time="2026-01-14T13:30:13.967955096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:13.973690 containerd[1649]: time="2026-01-14T13:30:13.973651698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:30:13.973897 containerd[1649]: time="2026-01-14T13:30:13.973819459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:13.976512 kubelet[2886]: E0114 13:30:13.975826 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:30:13.977486 kubelet[2886]: E0114 13:30:13.975985 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:30:13.977553 
kubelet[2886]: E0114 13:30:13.977480 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:13.987520 containerd[1649]: time="2026-01-14T13:30:13.985636866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:30:14.112262 containerd[1649]: time="2026-01-14T13:30:14.111425663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:30:14.117498 containerd[1649]: time="2026-01-14T13:30:14.117355406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:30:14.117678 containerd[1649]: time="2026-01-14T13:30:14.117558824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:30:14.118390 kubelet[2886]: E0114 13:30:14.117921 2886 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:30:14.118390 kubelet[2886]: E0114 13:30:14.118354 2886 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 
13:30:14.118566 kubelet[2886]: E0114 13:30:14.118508 2886 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4xm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-ckktd_calico-system(8200b33d-eb45-4c93-98d1-0c3029a31280): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:30:14.119960 kubelet[2886]: E0114 13:30:14.119879 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:30:14.160593 kubelet[2886]: E0114 13:30:14.158292 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:15.169693 kubelet[2886]: E0114 13:30:15.169663 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:16.179597 kubelet[2886]: E0114 13:30:16.178625 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:17.294787 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:41930.service - OpenSSH per-connection server daemon (10.0.0.1:41930). 
Jan 14 13:30:17.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:17.304284 kernel: kauditd_printk_skb: 76 callbacks suppressed Jan 14 13:30:17.304386 kernel: audit: type=1130 audit(1768397417.293:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:17.438000 audit[5656]: USER_ACCT pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.439791 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 41930 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:17.443701 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:17.456837 systemd-logind[1630]: New session 24 of user core. 
Jan 14 13:30:17.438000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.496691 kernel: audit: type=1101 audit(1768397417.438:891): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.496781 kernel: audit: type=1103 audit(1768397417.438:892): pid=5656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.496817 kernel: audit: type=1006 audit(1768397417.438:893): pid=5656 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 13:30:17.513787 kernel: audit: type=1300 audit(1768397417.438:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf71bad90 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:17.438000 audit[5656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf71bad90 a2=3 a3=0 items=0 ppid=1 pid=5656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:17.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:17.559359 kernel: audit: type=1327 audit(1768397417.438:893): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:17.568663 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 13:30:17.573000 audit[5656]: USER_START pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.577000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.642929 kernel: audit: type=1105 audit(1768397417.573:894): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.643370 kernel: audit: type=1103 audit(1768397417.577:895): pid=5660 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.839364 sshd[5660]: Connection closed by 10.0.0.1 port 41930 Jan 14 13:30:17.840755 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:17.846000 audit[5656]: USER_END pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 13:30:17.850574 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:41930.service: Deactivated successfully. Jan 14 13:30:17.855726 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:30:17.859958 systemd-logind[1630]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:30:17.864403 systemd-logind[1630]: Removed session 24. Jan 14 13:30:17.892520 kernel: audit: type=1106 audit(1768397417.846:896): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.892615 kernel: audit: type=1104 audit(1768397417.846:897): pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.846000 audit[5656]: CRED_DISP pid=5656 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:17.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:41930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:18.556624 kubelet[2886]: E0114 13:30:18.556391 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:19.875398 kubelet[2886]: E0114 13:30:19.874717 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:22.861514 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:41946.service - OpenSSH per-connection server daemon (10.0.0.1:41946). Jan 14 13:30:22.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:22.870425 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:30:22.870495 kernel: audit: type=1130 audit(1768397422.860:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:22.880444 kubelet[2886]: E0114 13:30:22.878720 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:22.882515 kubelet[2886]: E0114 13:30:22.881744 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:30:22.885500 kubelet[2886]: E0114 13:30:22.884895 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:30:22.994000 audit[5673]: USER_ACCT pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:22.996562 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 41946 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:23.001291 sshd-session[5673]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 14 13:30:23.018579 systemd-logind[1630]: New session 25 of user core. Jan 14 13:30:22.998000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.066524 kernel: audit: type=1101 audit(1768397422.994:900): pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.066595 kernel: audit: type=1103 audit(1768397422.998:901): pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.066640 kernel: audit: type=1006 audit(1768397422.998:902): pid=5673 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 13:30:23.086534 kernel: audit: type=1300 audit(1768397422.998:902): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccac4fed0 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:22.998000 audit[5673]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccac4fed0 a2=3 a3=0 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:22.998000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:23.138831 kernel: audit: type=1327 audit(1768397422.998:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:23.148742 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 13:30:23.154000 audit[5673]: USER_START pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.196267 kernel: audit: type=1105 audit(1768397423.154:903): pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.157000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.224296 kernel: audit: type=1103 audit(1768397423.157:904): pid=5677 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.411477 sshd[5677]: Connection closed by 10.0.0.1 port 41946 Jan 14 13:30:23.413356 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:23.417000 audit[5673]: USER_END pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.424605 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:41946.service: Deactivated successfully. Jan 14 13:30:23.424663 systemd-logind[1630]: Session 25 logged out. Waiting for processes to exit. Jan 14 13:30:23.431663 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:30:23.438258 systemd-logind[1630]: Removed session 25. Jan 14 13:30:23.417000 audit[5673]: CRED_DISP pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.463615 kernel: audit: type=1106 audit(1768397423.417:905): pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.463653 kernel: audit: type=1104 audit(1768397423.417:906): pid=5673 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:23.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:41946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:24.874828 kubelet[2886]: E0114 13:30:24.874619 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:30:24.913790 kubelet[2886]: E0114 13:30:24.913644 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:25.223370 kubelet[2886]: E0114 13:30:25.222833 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:26.879273 kubelet[2886]: E0114 13:30:26.878914 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:30:26.880535 kubelet[2886]: E0114 13:30:26.880306 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:30:26.882431 kubelet[2886]: E0114 13:30:26.882391 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:30:28.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:47676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:28.436607 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:47676.service - OpenSSH per-connection server daemon (10.0.0.1:47676). 
Jan 14 13:30:28.444749 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:30:28.444809 kernel: audit: type=1130 audit(1768397428.435:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:47676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:28.569932 kubelet[2886]: E0114 13:30:28.569899 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 13:30:28.584000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.593269 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:28.593706 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 47676 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:28.624621 systemd-logind[1630]: New session 26 of user core. 
Jan 14 13:30:28.588000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.668701 kernel: audit: type=1101 audit(1768397428.584:909): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.668779 kernel: audit: type=1103 audit(1768397428.588:910): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.588000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeebbb98c0 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:28.733609 kernel: audit: type=1006 audit(1768397428.588:911): pid=5718 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 13:30:28.733762 kernel: audit: type=1300 audit(1768397428.588:911): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeebbb98c0 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:28.733786 kernel: audit: type=1327 audit(1768397428.588:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:28.588000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:28.734583 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 13:30:28.749490 kernel: audit: type=1105 audit(1768397428.741:912): pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.741000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.745000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.828407 kernel: audit: type=1103 audit(1768397428.745:913): pid=5722 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.940471 sshd[5722]: Connection closed by 10.0.0.1 port 47676 Jan 14 13:30:28.941226 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:28.945000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 13:30:28.953594 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:47676.service: Deactivated successfully. Jan 14 13:30:28.957887 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:30:28.962709 systemd-logind[1630]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:30:28.966797 systemd-logind[1630]: Removed session 26. Jan 14 13:30:28.995746 kernel: audit: type=1106 audit(1768397428.945:914): pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.945000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:28.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:47676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:29.023313 kernel: audit: type=1104 audit(1768397428.945:915): pid=5718 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:33.888473 kubelet[2886]: E0114 13:30:33.887818 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:30:33.963498 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:47688.service - OpenSSH per-connection server daemon (10.0.0.1:47688). Jan 14 13:30:33.989750 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:30:33.989805 kernel: audit: type=1130 audit(1768397433.962:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:47688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:33.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:47688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:34.228000 audit[5743]: USER_ACCT pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.236786 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:34.255469 sshd[5743]: Accepted publickey for core from 10.0.0.1 port 47688 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:34.277284 kernel: audit: type=1101 audit(1768397434.228:918): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.277400 kernel: audit: type=1103 audit(1768397434.233:919): pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.233000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.303306 systemd-logind[1630]: New session 27 of user core. Jan 14 13:30:34.326749 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 13:30:34.353318 kernel: audit: type=1006 audit(1768397434.233:920): pid=5743 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 13:30:34.233000 audit[5743]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd7b62d70 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:34.401359 kernel: audit: type=1300 audit(1768397434.233:920): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd7b62d70 a2=3 a3=0 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:34.233000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:34.343000 audit[5743]: USER_START pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.462415 kernel: audit: type=1327 audit(1768397434.233:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:34.462524 kernel: audit: type=1105 audit(1768397434.343:921): pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.349000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.500066 kernel: audit: type=1103 audit(1768397434.349:922): pid=5747 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.594386 sshd[5747]: Connection closed by 10.0.0.1 port 47688 Jan 14 13:30:34.595489 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:34.599000 audit[5743]: USER_END pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.606240 systemd[1]: Started sshd@26-10.0.0.26:22-10.0.0.1:35586.service - OpenSSH per-connection server daemon (10.0.0.1:35586). Jan 14 13:30:34.610424 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:47688.service: Deactivated successfully. Jan 14 13:30:34.617903 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:30:34.626406 systemd-logind[1630]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:30:34.630931 systemd-logind[1630]: Removed session 27. Jan 14 13:30:34.599000 audit[5743]: CRED_DISP pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.26:22-10.0.0.1:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:34.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:47688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:34.655256 kernel: audit: type=1106 audit(1768397434.599:923): pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.655325 kernel: audit: type=1104 audit(1768397434.599:924): pid=5743 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.852000 audit[5757]: USER_ACCT pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.854866 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 35586 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:34.858000 audit[5757]: CRED_ACQ pid=5757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.858000 audit[5757]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58e28db0 a2=3 a3=0 items=0 ppid=1 pid=5757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:34.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:34.863458 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:34.878745 systemd-logind[1630]: New session 28 of user core. Jan 14 13:30:34.888502 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 13:30:34.897000 audit[5757]: USER_START pid=5757 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:34.903000 audit[5764]: CRED_ACQ pid=5764 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:35.784381 sshd[5764]: Connection closed by 10.0.0.1 port 35586 Jan 14 13:30:35.785625 sshd-session[5757]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:35.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.26:22-10.0.0.1:35592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:35.811715 systemd[1]: Started sshd@27-10.0.0.26:22-10.0.0.1:35592.service - OpenSSH per-connection server daemon (10.0.0.1:35592). 
Jan 14 13:30:35.828000 audit[5757]: USER_END pid=5757 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:35.829000 audit[5757]: CRED_DISP pid=5757 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:35.837695 systemd[1]: sshd@26-10.0.0.26:22-10.0.0.1:35586.service: Deactivated successfully. Jan 14 13:30:35.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.26:22-10.0.0.1:35586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:35.842903 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:30:35.849718 systemd-logind[1630]: Session 28 logged out. Waiting for processes to exit. Jan 14 13:30:35.853459 systemd-logind[1630]: Removed session 28. 
Jan 14 13:30:35.881946 kubelet[2886]: E0114 13:30:35.881820 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:30:35.883507 kubelet[2886]: E0114 13:30:35.883286 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:30:36.091000 audit[5773]: USER_ACCT pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:36.093665 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 35592 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:36.097000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:36.097000 audit[5773]: SYSCALL arch=c000003e syscall=1 success=yes 
exit=3 a0=8 a1=7ffcc3d0c440 a2=3 a3=0 items=0 ppid=1 pid=5773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:36.097000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:36.101413 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:36.119883 systemd-logind[1630]: New session 29 of user core. Jan 14 13:30:36.136754 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 13:30:36.142000 audit[5773]: USER_START pid=5773 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:36.148000 audit[5780]: CRED_ACQ pid=5780 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.567000 audit[5793]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:37.567000 audit[5793]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd5b7875d0 a2=0 a3=7ffd5b7875bc items=0 ppid=3003 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:37.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:37.586000 audit[5793]: 
NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:37.616241 sshd[5780]: Connection closed by 10.0.0.1 port 35592 Jan 14 13:30:37.616898 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:37.621000 audit[5773]: USER_END pid=5773 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.622000 audit[5773]: CRED_DISP pid=5773 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.631713 systemd[1]: sshd@27-10.0.0.26:22-10.0.0.1:35592.service: Deactivated successfully. Jan 14 13:30:37.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.26:22-10.0.0.1:35592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:37.635745 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 13:30:37.646414 systemd-logind[1630]: Session 29 logged out. Waiting for processes to exit. Jan 14 13:30:37.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.26:22-10.0.0.1:35594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:37.657355 systemd[1]: Started sshd@28-10.0.0.26:22-10.0.0.1:35594.service - OpenSSH per-connection server daemon (10.0.0.1:35594). Jan 14 13:30:37.659368 systemd-logind[1630]: Removed session 29. 
Jan 14 13:30:37.586000 audit[5793]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd5b7875d0 a2=0 a3=0 items=0 ppid=3003 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:37.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:37.703000 audit[5800]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:37.703000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffcbaedcb0 a2=0 a3=7fffcbaedc9c items=0 ppid=3003 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:37.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:37.711000 audit[5800]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:37.711000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcbaedcb0 a2=0 a3=0 items=0 ppid=3003 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:37.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:37.786000 audit[5798]: USER_ACCT pid=5798 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.794000 audit[5798]: CRED_ACQ pid=5798 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.795723 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 35594 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:37.795000 audit[5798]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf2bbf810 a2=3 a3=0 items=0 ppid=1 pid=5798 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:37.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:37.798406 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:37.816918 systemd-logind[1630]: New session 30 of user core. Jan 14 13:30:37.830498 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 14 13:30:37.842000 audit[5798]: USER_START pid=5798 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:37.846000 audit[5804]: CRED_ACQ pid=5804 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.474281 sshd[5804]: Connection closed by 10.0.0.1 port 35594 Jan 14 13:30:38.474488 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:38.478000 audit[5798]: USER_END pid=5798 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.479000 audit[5798]: CRED_DISP pid=5798 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.491725 systemd[1]: sshd@28-10.0.0.26:22-10.0.0.1:35594.service: Deactivated successfully. Jan 14 13:30:38.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.26:22-10.0.0.1:35594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:38.501068 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 13:30:38.506917 systemd-logind[1630]: Session 30 logged out. Waiting for processes to exit. 
Jan 14 13:30:38.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.26:22-10.0.0.1:35606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:38.512761 systemd[1]: Started sshd@29-10.0.0.26:22-10.0.0.1:35606.service - OpenSSH per-connection server daemon (10.0.0.1:35606). Jan 14 13:30:38.518698 systemd-logind[1630]: Removed session 30. Jan 14 13:30:38.637000 audit[5815]: USER_ACCT pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.639742 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 35606 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:38.642000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.642000 audit[5815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc6640620 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:38.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:38.645906 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:38.659907 systemd-logind[1630]: New session 31 of user core. Jan 14 13:30:38.672536 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 14 13:30:38.683000 audit[5815]: USER_START pid=5815 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.688000 audit[5820]: CRED_ACQ pid=5820 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.988639 sshd[5820]: Connection closed by 10.0.0.1 port 35606 Jan 14 13:30:38.990687 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:39.030409 kernel: kauditd_printk_skb: 54 callbacks suppressed Jan 14 13:30:39.030570 kernel: audit: type=1106 audit(1768397438.994:963): pid=5815 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.994000 audit[5815]: USER_END pid=5815 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:39.004628 systemd[1]: sshd@29-10.0.0.26:22-10.0.0.1:35606.service: Deactivated successfully. Jan 14 13:30:39.012784 systemd[1]: session-31.scope: Deactivated successfully. Jan 14 13:30:39.020612 systemd-logind[1630]: Session 31 logged out. Waiting for processes to exit. Jan 14 13:30:39.023289 systemd-logind[1630]: Removed session 31. 
Jan 14 13:30:39.054503 kernel: audit: type=1104 audit(1768397438.995:964): pid=5815 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:38.995000 audit[5815]: CRED_DISP pid=5815 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:39.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.26:22-10.0.0.1:35606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:39.092275 kernel: audit: type=1131 audit(1768397439.004:965): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.26:22-10.0.0.1:35606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:39.881812 kubelet[2886]: E0114 13:30:39.881664 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:30:41.878407 kubelet[2886]: E0114 13:30:41.877611 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:30:41.879848 kubelet[2886]: E0114 13:30:41.878703 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280" Jan 14 13:30:44.013317 systemd[1]: Started sshd@30-10.0.0.26:22-10.0.0.1:35622.service - OpenSSH per-connection server daemon (10.0.0.1:35622). Jan 14 13:30:44.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.26:22-10.0.0.1:35622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:44.049482 kernel: audit: type=1130 audit(1768397444.012:966): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.26:22-10.0.0.1:35622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:44.175000 audit[5837]: USER_ACCT pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.178488 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 35622 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:44.182426 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:44.178000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.225473 systemd-logind[1630]: New session 32 of user core. Jan 14 13:30:44.253829 kernel: audit: type=1101 audit(1768397444.175:967): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.253933 kernel: audit: type=1103 audit(1768397444.178:968): pid=5837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.278420 kernel: audit: type=1006 audit(1768397444.179:969): pid=5837 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 14 13:30:44.278567 kernel: audit: type=1300 audit(1768397444.179:969): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff225965e0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:44.179000 audit[5837]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff225965e0 a2=3 a3=0 items=0 ppid=1 pid=5837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:44.294617 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 14 13:30:44.179000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:44.338905 kernel: audit: type=1327 audit(1768397444.179:969): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:44.339367 kernel: audit: type=1105 audit(1768397444.310:970): pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.310000 audit[5837]: USER_START pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.318000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.426409 kernel: audit: type=1103 audit(1768397444.318:971): pid=5842 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.593770 sshd[5842]: Connection closed by 10.0.0.1 port 35622 Jan 14 13:30:44.593790 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:44.596000 audit[5837]: USER_END pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.608694 systemd[1]: sshd@30-10.0.0.26:22-10.0.0.1:35622.service: Deactivated successfully. Jan 14 13:30:44.615626 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 13:30:44.624931 systemd-logind[1630]: Session 32 logged out. Waiting for processes to exit. Jan 14 13:30:44.626798 systemd-logind[1630]: Removed session 32. 
Jan 14 13:30:44.650325 kernel: audit: type=1106 audit(1768397444.596:972): pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.650493 kernel: audit: type=1104 audit(1768397444.596:973): pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.596000 audit[5837]: CRED_DISP pid=5837 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:44.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.26:22-10.0.0.1:35622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:46.877265 kubelet[2886]: E0114 13:30:46.876355 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050" Jan 14 13:30:46.878681 kubelet[2886]: E0114 13:30:46.878432 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd" Jan 14 13:30:47.877291 kubelet[2886]: E0114 13:30:47.876850 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6" Jan 14 13:30:48.874590 kubelet[2886]: E0114 13:30:48.873876 2886 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 14 13:30:49.055000 audit[5855]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:49.063713 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:30:49.063802 kernel: audit: type=1325 audit(1768397449.055:975): table=filter:146 family=2 entries=26 op=nft_register_rule pid=5855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:49.087410 kernel: audit: type=1300 audit(1768397449.055:975): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7b6d4360 a2=0 a3=7ffc7b6d434c items=0 ppid=3003 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:49.055000 audit[5855]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7b6d4360 a2=0 a3=7ffc7b6d434c items=0 ppid=3003 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:49.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:49.151273 kernel: audit: type=1327 audit(1768397449.055:975): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:49.156000 audit[5855]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:49.156000 audit[5855]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc7b6d4360 a2=0 a3=7ffc7b6d434c items=0 ppid=3003 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:49.228231 kernel: audit: type=1325 audit(1768397449.156:976): table=nat:147 family=2 entries=104 op=nft_register_chain pid=5855 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:30:49.228375 kernel: audit: type=1300 audit(1768397449.156:976): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc7b6d4360 a2=0 a3=7ffc7b6d434c items=0 ppid=3003 pid=5855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:49.228408 kernel: audit: type=1327 audit(1768397449.156:976): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:49.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:30:49.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.26:22-10.0.0.1:45246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:49.610448 systemd[1]: Started sshd@31-10.0.0.26:22-10.0.0.1:45246.service - OpenSSH per-connection server daemon (10.0.0.1:45246). Jan 14 13:30:49.637385 kernel: audit: type=1130 audit(1768397449.609:977): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.26:22-10.0.0.1:45246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:30:49.763000 audit[5857]: USER_ACCT pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:49.770844 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:49.776793 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 45246 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:49.783535 systemd-logind[1630]: New session 33 of user core. Jan 14 13:30:49.766000 audit[5857]: CRED_ACQ pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:49.828380 kernel: audit: type=1101 audit(1768397449.763:978): pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:49.828482 kernel: audit: type=1103 audit(1768397449.766:979): pid=5857 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:49.828515 kernel: audit: type=1006 audit(1768397449.766:980): pid=5857 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 14 13:30:49.766000 audit[5857]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdeb11bac0 a2=3 a3=0 items=0 ppid=1 pid=5857 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:49.766000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:49.853500 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 14 13:30:49.860000 audit[5857]: USER_START pid=5857 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:49.864000 audit[5861]: CRED_ACQ pid=5861 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:50.096812 sshd[5861]: Connection closed by 10.0.0.1 port 45246 Jan 14 13:30:50.097571 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:50.105000 audit[5857]: USER_END pid=5857 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:50.105000 audit[5857]: CRED_DISP pid=5857 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:50.110624 systemd[1]: sshd@31-10.0.0.26:22-10.0.0.1:45246.service: Deactivated successfully. 
Jan 14 13:30:50.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.26:22-10.0.0.1:45246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:50.117432 systemd[1]: session-33.scope: Deactivated successfully. Jan 14 13:30:50.123894 systemd-logind[1630]: Session 33 logged out. Waiting for processes to exit. Jan 14 13:30:50.129917 systemd-logind[1630]: Removed session 33. Jan 14 13:30:52.878565 kubelet[2886]: E0114 13:30:52.878396 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-kb2gl" podUID="fc822dd2-4a0b-4df8-969d-8ce5598b7069" Jan 14 13:30:54.882570 kubelet[2886]: E0114 13:30:54.879746 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76d688f66-n8bg2" 
podUID="2d3c1365-6a1f-45b8-8652-2b261d46979e" Jan 14 13:30:55.126943 systemd[1]: Started sshd@32-10.0.0.26:22-10.0.0.1:59628.service - OpenSSH per-connection server daemon (10.0.0.1:59628). Jan 14 13:30:55.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.26:22-10.0.0.1:59628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:55.135366 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 13:30:55.135442 kernel: audit: type=1130 audit(1768397455.126:986): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.26:22-10.0.0.1:59628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:30:55.309000 audit[5899]: USER_ACCT pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.311414 sshd[5899]: Accepted publickey for core from 10.0.0.1 port 59628 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ Jan 14 13:30:55.337825 sshd-session[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:30:55.341294 kernel: audit: type=1101 audit(1768397455.309:987): pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.333000 audit[5899]: CRED_ACQ pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 14 13:30:55.370375 kernel: audit: type=1103 audit(1768397455.333:988): pid=5899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.377275 systemd-logind[1630]: New session 34 of user core. Jan 14 13:30:55.334000 audit[5899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7bbc4ee0 a2=3 a3=0 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:55.420345 kernel: audit: type=1006 audit(1768397455.334:989): pid=5899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 14 13:30:55.420435 kernel: audit: type=1300 audit(1768397455.334:989): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7bbc4ee0 a2=3 a3=0 items=0 ppid=1 pid=5899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:30:55.420461 kernel: audit: type=1327 audit(1768397455.334:989): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:55.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:30:55.436675 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 14 13:30:55.442000 audit[5899]: USER_START pid=5899 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.482753 kernel: audit: type=1105 audit(1768397455.442:990): pid=5899 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.482831 kernel: audit: type=1103 audit(1768397455.446:991): pid=5903 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.446000 audit[5903]: CRED_ACQ pid=5903 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.726426 sshd[5903]: Connection closed by 10.0.0.1 port 59628 Jan 14 13:30:55.727448 sshd-session[5899]: pam_unix(sshd:session): session closed for user core Jan 14 13:30:55.728000 audit[5899]: USER_END pid=5899 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.773502 kernel: audit: type=1106 audit(1768397455.728:992): pid=5899 uid=0 auid=500 ses=34 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.729000 audit[5899]: CRED_DISP pid=5899 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.777581 systemd[1]: sshd@32-10.0.0.26:22-10.0.0.1:59628.service: Deactivated successfully. Jan 14 13:30:55.781922 systemd[1]: session-34.scope: Deactivated successfully. Jan 14 13:30:55.785727 systemd-logind[1630]: Session 34 logged out. Waiting for processes to exit. Jan 14 13:30:55.787460 systemd-logind[1630]: Removed session 34. Jan 14 13:30:55.805294 kernel: audit: type=1104 audit(1768397455.729:993): pid=5899 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 13:30:55.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.26:22-10.0.0.1:59628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Jan 14 13:30:56.876272 kubelet[2886]: E0114 13:30:56.875813 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ckktd" podUID="8200b33d-eb45-4c93-98d1-0c3029a31280"
Jan 14 13:30:57.874359 kubelet[2886]: E0114 13:30:57.874286 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68b6f8f57b-4vsgx" podUID="43c81015-17c1-4886-ba54-03a8237f3050"
Jan 14 13:30:58.877300 kubelet[2886]: E0114 13:30:58.877251 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h2gf2" podUID="97139d64-ebd5-495e-81ad-3f4aa4c54bfd"
Jan 14 13:30:59.874378 kubelet[2886]: E0114 13:30:59.873449 2886 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c96748b8f-wwf76" podUID="1356d1d1-69e1-470e-955d-5a3a9ab090a6"
Jan 14 13:31:00.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.26:22-10.0.0.1:59644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:31:00.744838 systemd[1]: Started sshd@33-10.0.0.26:22-10.0.0.1:59644.service - OpenSSH per-connection server daemon (10.0.0.1:59644).
Jan 14 13:31:00.750948 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 13:31:00.751319 kernel: audit: type=1130 audit(1768397460.743:995): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.26:22-10.0.0.1:59644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 13:31:00.856000 audit[5916]: USER_ACCT pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:00.857532 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 59644 ssh2: RSA SHA256:6ImhlCg2Y75dQ4DnaE2aO9dHLur/A4YXKF0wGnkswcQ
Jan 14 13:31:00.860874 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 13:31:00.868548 systemd-logind[1630]: New session 35 of user core.
Jan 14 13:31:00.858000 audit[5916]: CRED_ACQ pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:00.926787 kernel: audit: type=1101 audit(1768397460.856:996): pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:00.926865 kernel: audit: type=1103 audit(1768397460.858:997): pid=5916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:00.858000 audit[5916]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea4a05e30 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:31:00.987886 kernel: audit: type=1006 audit(1768397460.858:998): pid=5916 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1
Jan 14 13:31:00.989533 kernel: audit: type=1300 audit(1768397460.858:998): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea4a05e30 a2=3 a3=0 items=0 ppid=1 pid=5916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 13:31:00.990434 kernel: audit: type=1327 audit(1768397460.858:998): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:31:00.858000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 13:31:01.004548 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 14 13:31:01.012000 audit[5916]: USER_START pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.016000 audit[5920]: CRED_ACQ pid=5920 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.089680 kernel: audit: type=1105 audit(1768397461.012:999): pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.089751 kernel: audit: type=1103 audit(1768397461.016:1000): pid=5920 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.223740 sshd[5920]: Connection closed by 10.0.0.1 port 59644
Jan 14 13:31:01.224608 sshd-session[5916]: pam_unix(sshd:session): session closed for user core
Jan 14 13:31:01.228000 audit[5916]: USER_END pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.233938 systemd-logind[1630]: Session 35 logged out. Waiting for processes to exit.
Jan 14 13:31:01.235534 systemd[1]: sshd@33-10.0.0.26:22-10.0.0.1:59644.service: Deactivated successfully.
Jan 14 13:31:01.243842 systemd[1]: session-35.scope: Deactivated successfully.
Jan 14 13:31:01.250740 systemd-logind[1630]: Removed session 35.
Jan 14 13:31:01.228000 audit[5916]: CRED_DISP pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.305488 kernel: audit: type=1106 audit(1768397461.228:1001): pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.305580 kernel: audit: type=1104 audit(1768397461.228:1002): pid=5916 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 13:31:01.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.26:22-10.0.0.1:59644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'