Jan 20 06:36:07.334372 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026
Jan 20 06:36:07.334400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0
Jan 20 06:36:07.334413 kernel: BIOS-provided physical RAM map:
Jan 20 06:36:07.334427 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 06:36:07.334436 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 06:36:07.334446 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 06:36:07.334647 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 06:36:07.334659 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 06:36:07.334668 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 06:36:07.334676 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 06:36:07.334685 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 06:36:07.334697 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 06:36:07.334709 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 06:36:07.334718 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 06:36:07.334728 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 06:36:07.334737 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 06:36:07.334750 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 06:36:07.334759 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 06:36:07.334768 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 06:36:07.334777 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 06:36:07.334786 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 06:36:07.334798 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 06:36:07.334807 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 06:36:07.334816 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 06:36:07.334825 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 06:36:07.334834 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 06:36:07.334948 kernel: NX (Execute Disable) protection: active
Jan 20 06:36:07.334957 kernel: APIC: Static calls initialized
Jan 20 06:36:07.334966 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 06:36:07.334976 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 06:36:07.334988 kernel: extended physical RAM map:
Jan 20 06:36:07.334997 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 06:36:07.335006 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 06:36:07.335015 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 06:36:07.335024 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 06:36:07.335033 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 06:36:07.335042 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 06:36:07.335055 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 06:36:07.335065 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 06:36:07.335077 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 06:36:07.335091 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 06:36:07.335103 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 06:36:07.335113 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 06:36:07.335122 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 06:36:07.335132 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 06:36:07.335141 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 06:36:07.335154 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 06:36:07.335166 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 06:36:07.335175 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 06:36:07.335185 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 06:36:07.366721 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 06:36:07.366741 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 06:36:07.366750 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 06:36:07.366757 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 06:36:07.366764 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 06:36:07.366771 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 06:36:07.366778 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 06:36:07.366785 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 06:36:07.366793 kernel: efi: EFI v2.7 by EDK II
Jan 20 06:36:07.366800 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 06:36:07.366807 kernel: random: crng init done
Jan 20 06:36:07.366824 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 06:36:07.366832 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 06:36:07.366928 kernel: secureboot: Secure boot disabled
Jan 20 06:36:07.366938 kernel: SMBIOS 2.8 present.
Jan 20 06:36:07.366945 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 06:36:07.366952 kernel: DMI: Memory slots populated: 1/1
Jan 20 06:36:07.366959 kernel: Hypervisor detected: KVM
Jan 20 06:36:07.366966 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 06:36:07.366974 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 06:36:07.366981 kernel: kvm-clock: using sched offset of 9466358117 cycles
Jan 20 06:36:07.366988 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 06:36:07.367000 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 06:36:07.367008 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 06:36:07.367016 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 06:36:07.367023 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 06:36:07.367031 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 06:36:07.367039 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 06:36:07.367047 kernel: Using GB pages for direct mapping
Jan 20 06:36:07.367057 kernel: ACPI: Early table checksum verification disabled
Jan 20 06:36:07.367065 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 06:36:07.367072 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 06:36:07.367080 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367087 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367095 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 06:36:07.367105 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367118 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367135 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367146 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 06:36:07.367156 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 06:36:07.367166 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 06:36:07.367177 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 06:36:07.367190 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 06:36:07.367205 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 06:36:07.367220 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 06:36:07.367230 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 06:36:07.367240 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 06:36:07.367251 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 06:36:07.367265 kernel: No NUMA configuration found
Jan 20 06:36:07.367277 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 06:36:07.367287 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 06:36:07.367301 kernel: Zone ranges:
Jan 20 06:36:07.367313 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 06:36:07.367327 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 06:36:07.367338 kernel: Normal empty
Jan 20 06:36:07.367348 kernel: Device empty
Jan 20 06:36:07.367358 kernel: Movable zone start for each node
Jan 20 06:36:07.367371 kernel: Early memory node ranges
Jan 20 06:36:07.367382 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 06:36:07.367396 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 06:36:07.367406 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 06:36:07.367419 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 06:36:07.367432 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 06:36:07.367443 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 06:36:07.367453 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 06:36:07.367634 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 06:36:07.367646 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 06:36:07.367654 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 06:36:07.367669 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 06:36:07.367679 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 06:36:07.367687 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 06:36:07.367694 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 06:36:07.367702 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 06:36:07.367710 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 06:36:07.367718 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 06:36:07.367725 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 06:36:07.367735 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 06:36:07.367743 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 06:36:07.367751 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 06:36:07.367759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 06:36:07.367769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 06:36:07.367776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 06:36:07.367784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 06:36:07.367792 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 06:36:07.367799 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 06:36:07.367807 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 06:36:07.367814 kernel: TSC deadline timer available
Jan 20 06:36:07.367824 kernel: CPU topo: Max. logical packages: 1
Jan 20 06:36:07.367832 kernel: CPU topo: Max. logical dies: 1
Jan 20 06:36:07.367925 kernel: CPU topo: Max. dies per package: 1
Jan 20 06:36:07.367935 kernel: CPU topo: Max. threads per core: 1
Jan 20 06:36:07.367943 kernel: CPU topo: Num. cores per package: 4
Jan 20 06:36:07.367950 kernel: CPU topo: Num. threads per package: 4
Jan 20 06:36:07.367958 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 06:36:07.367966 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 06:36:07.367976 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 06:36:07.367984 kernel: kvm-guest: setup PV sched yield
Jan 20 06:36:07.367992 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 06:36:07.368000 kernel: Booting paravirtualized kernel on KVM
Jan 20 06:36:07.368008 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 06:36:07.368016 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 06:36:07.368024 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 06:36:07.368034 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 06:36:07.368042 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 06:36:07.368049 kernel: kvm-guest: PV spinlocks enabled
Jan 20 06:36:07.368057 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 06:36:07.368066 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0
Jan 20 06:36:07.368074 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 06:36:07.368084 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 06:36:07.368092 kernel: Fallback order for Node 0: 0
Jan 20 06:36:07.368100 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 06:36:07.368108 kernel: Policy zone: DMA32
Jan 20 06:36:07.368116 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 06:36:07.368123 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 06:36:07.368131 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 20 06:36:07.368139 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 06:36:07.368149 kernel: Dynamic Preempt: voluntary
Jan 20 06:36:07.368156 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 06:36:07.368233 kernel: rcu: RCU event tracing is enabled.
Jan 20 06:36:07.368242 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 06:36:07.368249 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 06:36:07.368257 kernel: Rude variant of Tasks RCU enabled.
Jan 20 06:36:07.368265 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 06:36:07.368272 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 06:36:07.368282 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 06:36:07.368290 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 06:36:07.368298 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 06:36:07.368306 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 06:36:07.368314 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 06:36:07.368322 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 06:36:07.368329 kernel: Console: colour dummy device 80x25
Jan 20 06:36:07.368339 kernel: printk: legacy console [ttyS0] enabled
Jan 20 06:36:07.368347 kernel: ACPI: Core revision 20240827
Jan 20 06:36:07.368355 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 06:36:07.368363 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 06:36:07.368370 kernel: x2apic enabled
Jan 20 06:36:07.368378 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 06:36:07.368386 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 06:36:07.368396 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 06:36:07.368403 kernel: kvm-guest: setup PV IPIs
Jan 20 06:36:07.368411 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 06:36:07.368419 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 06:36:07.368427 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 06:36:07.368435 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 06:36:07.368442 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 06:36:07.368452 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 06:36:07.368589 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 06:36:07.368597 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 06:36:07.368605 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 06:36:07.368613 kernel: Speculative Store Bypass: Vulnerable
Jan 20 06:36:07.368620 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 06:36:07.368629 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 06:36:07.368640 kernel: active return thunk: srso_alias_return_thunk
Jan 20 06:36:07.368648 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 06:36:07.368656 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 06:36:07.368664 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 06:36:07.368672 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 06:36:07.368680 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 06:36:07.368687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 06:36:07.368697 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 06:36:07.368705 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 06:36:07.368712 kernel: Freeing SMP alternatives memory: 32K
Jan 20 06:36:07.368720 kernel: pid_max: default: 32768 minimum: 301
Jan 20 06:36:07.368728 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 06:36:07.368735 kernel: landlock: Up and running.
Jan 20 06:36:07.368743 kernel: SELinux: Initializing.
Jan 20 06:36:07.368753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 06:36:07.368761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 06:36:07.368769 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 06:36:07.368777 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 06:36:07.368784 kernel: signal: max sigframe size: 1776
Jan 20 06:36:07.368792 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 06:36:07.368800 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 06:36:07.368809 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 06:36:07.368817 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 06:36:07.368825 kernel: smp: Bringing up secondary CPUs ...
Jan 20 06:36:07.368832 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 06:36:07.368921 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 06:36:07.368930 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 06:36:07.368938 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 06:36:07.368950 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120812K reserved, 0K cma-reserved)
Jan 20 06:36:07.368958 kernel: devtmpfs: initialized
Jan 20 06:36:07.368965 kernel: x86/mm: Memory block size: 128MB
Jan 20 06:36:07.368973 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 06:36:07.368981 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 06:36:07.368989 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 06:36:07.368996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 06:36:07.369007 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 06:36:07.369014 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 06:36:07.369022 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 06:36:07.369030 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 06:36:07.369038 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 06:36:07.369045 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 06:36:07.369053 kernel: audit: initializing netlink subsys (disabled)
Jan 20 06:36:07.369063 kernel: audit: type=2000 audit(1768890957.326:1): state=initialized audit_enabled=0 res=1
Jan 20 06:36:07.369071 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 06:36:07.369079 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 06:36:07.369086 kernel: cpuidle: using governor menu
Jan 20 06:36:07.369094 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 06:36:07.369102 kernel: dca service started, version 1.12.1
Jan 20 06:36:07.369110 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 06:36:07.369120 kernel: PCI: Using configuration type 1 for base access
Jan 20 06:36:07.369127 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 06:36:07.369135 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 06:36:07.369143 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 06:36:07.369150 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 06:36:07.369158 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 06:36:07.369166 kernel: ACPI: Added _OSI(Module Device)
Jan 20 06:36:07.369176 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 06:36:07.369183 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 06:36:07.369191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 06:36:07.369198 kernel: ACPI: Interpreter enabled
Jan 20 06:36:07.369206 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 06:36:07.369214 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 06:36:07.369222 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 06:36:07.369231 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 06:36:07.369239 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 06:36:07.369247 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 06:36:07.369730 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 06:36:07.370099 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 06:36:07.370342 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 06:36:07.370364 kernel: PCI host bridge to bus 0000:00
Jan 20 06:36:07.370788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 06:36:07.371099 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 06:36:07.371317 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 06:36:07.371669 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 06:36:07.371974 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 06:36:07.372170 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 06:36:07.372371 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 06:36:07.372783 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 06:36:07.373117 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 06:36:07.373352 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 06:36:07.373705 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 06:36:07.374027 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 06:36:07.374245 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 06:36:07.374438 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 13671 usecs
Jan 20 06:36:07.374820 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 06:36:07.375116 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 06:36:07.375341 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 06:36:07.375684 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 06:36:07.376011 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 06:36:07.376228 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 06:36:07.376430 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 06:36:07.376990 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 06:36:07.377305 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 06:36:07.377740 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 06:36:07.378035 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 06:36:07.378261 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 06:36:07.378453 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 06:36:07.378955 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 06:36:07.379198 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 06:36:07.379428 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 13671 usecs
Jan 20 06:36:07.379996 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 06:36:07.380245 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 06:36:07.380667 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 06:36:07.381014 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 06:36:07.381244 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 06:36:07.381261 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 06:36:07.381272 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 06:36:07.381283 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 06:36:07.381294 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 06:36:07.381310 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 06:36:07.381322 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 06:36:07.381335 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 06:36:07.381349 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 06:36:07.381360 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 06:36:07.381371 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 06:36:07.381381 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 06:36:07.381395 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 06:36:07.381406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 06:36:07.381419 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 06:36:07.381432 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 06:36:07.381442 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 06:36:07.381453 kernel: iommu: Default domain type: Translated
Jan 20 06:36:07.381623 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 06:36:07.381638 kernel: efivars: Registered efivars operations
Jan 20 06:36:07.381652 kernel: PCI: Using ACPI for IRQ routing
Jan 20 06:36:07.381664 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 06:36:07.381675 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 06:36:07.381685 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 06:36:07.381696 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 06:36:07.381706 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 06:36:07.381722 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 06:36:07.381734 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 06:36:07.381745 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 06:36:07.381756 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 06:36:07.382081 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 06:36:07.382309 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 06:36:07.382781 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 06:36:07.382807 kernel: vgaarb: loaded
Jan 20 06:36:07.382818 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 06:36:07.382829 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 06:36:07.382936 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 06:36:07.382949 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 06:36:07.382960 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 06:36:07.382974 kernel: pnp: PnP ACPI init
Jan 20 06:36:07.383222 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 06:36:07.383244 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 06:36:07.383256 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 06:36:07.383266 kernel: NET: Registered PF_INET protocol family
Jan 20 06:36:07.383277 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 06:36:07.383288 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 06:36:07.383320 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 06:36:07.383336 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 06:36:07.383347 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 06:36:07.383358 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 06:36:07.383370 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 06:36:07.383381 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 06:36:07.383392 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 06:36:07.383409 kernel: NET: Registered PF_XDP protocol family
Jan 20 06:36:07.383974 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 06:36:07.384203 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 06:36:07.384417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 06:36:07.384836 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 06:36:07.385163 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 06:36:07.385382 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 06:36:07.385970 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 06:36:07.386183 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 06:36:07.386204 kernel: PCI: CLS 0 bytes, default 64
Jan 20 06:36:07.386218 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 06:36:07.386229 kernel: Initialise system trusted keyrings
Jan 20 06:36:07.386240 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 06:36:07.386257 kernel: Key type asymmetric registered
Jan 20 06:36:07.386268 kernel: Asymmetric key parser 'x509' registered
Jan 20 06:36:07.386279 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 06:36:07.386292 kernel: io scheduler mq-deadline registered
Jan 20 06:36:07.386305 kernel: io scheduler kyber registered
Jan 20 06:36:07.386319 kernel: io scheduler bfq registered
Jan 20 06:36:07.386328 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 06:36:07.386338 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 06:36:07.386349 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 06:36:07.386357 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 06:36:07.386365 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 06:36:07.386374 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 06:36:07.386385 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 06:36:07.386393 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 06:36:07.386402 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 06:36:07.386742 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 06:36:07.386756 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 06:36:07.387064 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 06:36:07.387292 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T06:36:02 UTC (1768890962)
Jan 20 06:36:07.387704 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 06:36:07.387725 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 06:36:07.387739 kernel: efifb: probing for efifb
Jan 20 06:36:07.387752 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 06:36:07.387765 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 06:36:07.387778 kernel: efifb: scrolling: redraw
Jan 20 06:36:07.387795 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 06:36:07.387808 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 06:36:07.387820 kernel: fb0: EFI VGA frame buffer device
Jan 20 06:36:07.387836 kernel: pstore: Using crash dump compression: deflate
Jan 20 06:36:07.387948 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 06:36:07.387960 kernel: NET: Registered PF_INET6 protocol family
Jan 20 06:36:07.387973 kernel: Segment Routing with IPv6
Jan 20 06:36:07.387985 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 06:36:07.388002 kernel: NET: Registered PF_PACKET protocol family
Jan 20 06:36:07.388016 kernel: Key type dns_resolver
registered Jan 20 06:36:07.388029 kernel: IPI shorthand broadcast: enabled Jan 20 06:36:07.388043 kernel: sched_clock: Marking stable (4944082274, 713154744)->(6143894663, -486657645) Jan 20 06:36:07.388057 kernel: registered taskstats version 1 Jan 20 06:36:07.388070 kernel: Loading compiled-in X.509 certificates Jan 20 06:36:07.388084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 20 06:36:07.388104 kernel: Demotion targets for Node 0: null Jan 20 06:36:07.388116 kernel: Key type .fscrypt registered Jan 20 06:36:07.388127 kernel: Key type fscrypt-provisioning registered Jan 20 06:36:07.388138 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 06:36:07.388153 kernel: ima: Allocated hash algorithm: sha1 Jan 20 06:36:07.388164 kernel: ima: No architecture policies found Jan 20 06:36:07.388175 kernel: clk: Disabling unused clocks Jan 20 06:36:07.388191 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 06:36:07.388205 kernel: Write protecting the kernel read-only data: 47104k Jan 20 06:36:07.388217 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 06:36:07.388228 kernel: Run /init as init process Jan 20 06:36:07.388238 kernel: with arguments: Jan 20 06:36:07.388249 kernel: /init Jan 20 06:36:07.388260 kernel: with environment: Jan 20 06:36:07.388275 kernel: HOME=/ Jan 20 06:36:07.388289 kernel: TERM=linux Jan 20 06:36:07.388302 kernel: SCSI subsystem initialized Jan 20 06:36:07.388313 kernel: libata version 3.00 loaded. 
Jan 20 06:36:07.388768 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 06:36:07.388787 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 06:36:07.389105 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 06:36:07.389342 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 06:36:07.389735 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 06:36:07.390093 kernel: scsi host0: ahci Jan 20 06:36:07.390343 kernel: scsi host1: ahci Jan 20 06:36:07.390754 kernel: scsi host2: ahci Jan 20 06:36:07.391095 kernel: scsi host3: ahci Jan 20 06:36:07.391349 kernel: scsi host4: ahci Jan 20 06:36:07.391819 kernel: scsi host5: ahci Jan 20 06:36:07.391932 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 20 06:36:07.391949 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 20 06:36:07.391962 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 20 06:36:07.391978 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 20 06:36:07.391990 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 20 06:36:07.392001 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 20 06:36:07.392012 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 06:36:07.392023 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 06:36:07.392037 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 06:36:07.392048 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 06:36:07.392063 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 06:36:07.392074 kernel: ata3.00: applying bridge limits Jan 20 06:36:07.392085 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 06:36:07.392097 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 06:36:07.392112 kernel: 
ata3.00: LPM support broken, forcing max_power Jan 20 06:36:07.392123 kernel: ata3.00: configured for UDMA/100 Jan 20 06:36:07.392134 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 06:36:07.392401 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 06:36:07.392836 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 06:36:07.393186 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 06:36:07.393207 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 06:36:07.393629 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 06:36:07.393650 kernel: GPT:16515071 != 27000831 Jan 20 06:36:07.393670 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 06:36:07.393683 kernel: GPT:16515071 != 27000831 Jan 20 06:36:07.393696 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 06:36:07.393712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 06:36:07.393724 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 06:36:07.394072 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 06:36:07.394088 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 06:36:07.394110 kernel: device-mapper: uevent: version 1.0.3 Jan 20 06:36:07.394124 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 06:36:07.394136 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 06:36:07.394147 kernel: raid6: avx2x4 gen() 28113 MB/s Jan 20 06:36:07.394158 kernel: raid6: avx2x2 gen() 29386 MB/s Jan 20 06:36:07.394170 kernel: raid6: avx2x1 gen() 20676 MB/s Jan 20 06:36:07.394181 kernel: raid6: using algorithm avx2x2 gen() 29386 MB/s Jan 20 06:36:07.394197 kernel: raid6: .... 
xor() 19434 MB/s, rmw enabled Jan 20 06:36:07.394213 kernel: raid6: using avx2x2 recovery algorithm Jan 20 06:36:07.394224 kernel: xor: automatically using best checksumming function avx Jan 20 06:36:07.394236 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 06:36:07.394247 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (181) Jan 20 06:36:07.394259 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 06:36:07.394270 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:36:07.394285 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 06:36:07.394298 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 06:36:07.394312 kernel: loop: module loaded Jan 20 06:36:07.394325 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 06:36:07.394335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 06:36:07.394345 systemd[1]: Successfully made /usr/ read-only. Jan 20 06:36:07.394356 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:36:07.394368 systemd[1]: Detected virtualization kvm. Jan 20 06:36:07.394376 systemd[1]: Detected architecture x86-64. Jan 20 06:36:07.394384 systemd[1]: Running in initrd. Jan 20 06:36:07.394392 systemd[1]: No hostname configured, using default hostname. Jan 20 06:36:07.394401 systemd[1]: Hostname set to . Jan 20 06:36:07.394409 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:36:07.394420 systemd[1]: Queued start job for default target initrd.target. 
Jan 20 06:36:07.394428 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:36:07.394437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:36:07.394446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:36:07.394600 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 06:36:07.394611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:36:07.394623 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 06:36:07.394632 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 06:36:07.394641 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:36:07.394649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:36:07.394658 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:36:07.394666 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:36:07.394677 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:36:07.394685 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:36:07.394694 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:36:07.394702 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:36:07.394710 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:36:07.394720 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:36:07.394735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 06:36:07.394753 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 20 06:36:07.394766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:36:07.394778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:36:07.394791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:36:07.394807 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:36:07.394819 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 06:36:07.394831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 06:36:07.394941 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:36:07.394953 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 06:36:07.394972 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 06:36:07.394986 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 06:36:07.394995 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:36:07.395004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:36:07.395015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:07.395023 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 06:36:07.395064 systemd-journald[321]: Collecting audit messages is enabled. Jan 20 06:36:07.395090 kernel: audit: type=1130 audit(1768890967.335:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.395103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 20 06:36:07.395112 kernel: audit: type=1130 audit(1768890967.388:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.395120 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 06:36:07.395132 systemd-journald[321]: Journal started Jan 20 06:36:07.395148 systemd-journald[321]: Runtime Journal (/run/log/journal/7a85e3a24e404275b66ccda0f29f1aa7) is 6M, max 48M, 42M free. Jan 20 06:36:07.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.447987 kernel: audit: type=1130 audit(1768890967.423:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.448049 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:36:07.471142 kernel: audit: type=1130 audit(1768890967.469:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:07.475104 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:36:07.524655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:36:07.571252 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:36:07.650008 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 06:36:07.650045 kernel: Bridge firewalling registered Jan 20 06:36:07.650057 kernel: audit: type=1130 audit(1768890967.600:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.603749 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 06:36:07.609674 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:36:07.633637 systemd-modules-load[324]: Inserted module 'br_netfilter' Jan 20 06:36:07.703183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:36:07.760101 kernel: audit: type=1130 audit(1768890967.717:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:07.746057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:07.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.815784 kernel: audit: type=1130 audit(1768890967.790:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.816155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:36:07.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.874099 kernel: audit: type=1130 audit(1768890967.850:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.879240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:36:07.939038 kernel: audit: type=1130 audit(1768890967.880:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:07.944271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 06:36:07.958716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 06:36:08.068703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:36:08.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:08.073063 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:36:08.132377 kernel: audit: type=1130 audit(1768890968.068:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:08.070000 audit: BPF prog-id=6 op=LOAD Jan 20 06:36:08.163738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:36:08.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:08.248391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 06:36:08.358326 dracut-cmdline[360]: dracut-109 Jan 20 06:36:08.384411 dracut-cmdline[360]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:36:08.388159 systemd-resolved[356]: Positive Trust Anchors: Jan 20 06:36:08.388169 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:36:08.388173 systemd-resolved[356]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:36:08.388199 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:36:08.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:08.481423 systemd-resolved[356]: Defaulting to hostname 'linux'. Jan 20 06:36:08.485339 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:36:08.495302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:36:08.958847 kernel: Loading iSCSI transport class v2.0-870. Jan 20 06:36:09.006840 kernel: iscsi: registered transport (tcp) Jan 20 06:36:09.084445 kernel: iscsi: registered transport (qla4xxx) Jan 20 06:36:09.085115 kernel: QLogic iSCSI HBA Driver Jan 20 06:36:09.197403 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:36:09.289209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:36:09.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:09.297033 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 20 06:36:09.534957 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 06:36:09.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:09.551078 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 06:36:09.567218 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 06:36:09.688126 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:36:09.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:09.715000 audit: BPF prog-id=7 op=LOAD Jan 20 06:36:09.716000 audit: BPF prog-id=8 op=LOAD Jan 20 06:36:09.724166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:36:09.810082 systemd-udevd[595]: Using default interface naming scheme 'v257'. Jan 20 06:36:09.858644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:36:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:09.877216 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 06:36:09.985340 dracut-pre-trigger[639]: rd.md=0: removing MD RAID activation Jan 20 06:36:10.104365 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:36:10.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:10.107422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:36:10.174813 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:36:10.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:10.191000 audit: BPF prog-id=9 op=LOAD Jan 20 06:36:10.205337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:36:10.327183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:36:10.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:10.377787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 06:36:10.452030 systemd-networkd[728]: lo: Link UP Jan 20 06:36:10.452042 systemd-networkd[728]: lo: Gained carrier Jan 20 06:36:10.459030 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:36:10.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:10.485136 systemd[1]: Reached target network.target - Network. Jan 20 06:36:10.547997 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 06:36:10.589440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 06:36:10.644998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 06:36:10.677787 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 20 06:36:10.713331 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 06:36:10.744827 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 06:36:10.826619 disk-uuid[770]: Primary Header is updated. Jan 20 06:36:10.826619 disk-uuid[770]: Secondary Entries is updated. Jan 20 06:36:10.826619 disk-uuid[770]: Secondary Header is updated. Jan 20 06:36:10.867165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:36:10.868708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:10.889343 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:10.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:10.937229 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:36:10.937235 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:36:11.074125 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 06:36:11.074149 kernel: AES CTR mode by8 optimization enabled Jan 20 06:36:10.943367 systemd-networkd[728]: eth0: Link UP Jan 20 06:36:10.943830 systemd-networkd[728]: eth0: Gained carrier Jan 20 06:36:11.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:11.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:10.943845 systemd-networkd[728]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:36:10.992048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:11.082850 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 06:36:11.085384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:36:11.085828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:11.117772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:11.323832 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:11.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:11.361201 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 06:36:11.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:11.393332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:36:11.394220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:36:11.418827 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:36:11.444028 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 06:36:11.533763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 20 06:36:11.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:12.140436 systemd-networkd[728]: eth0: Gained IPv6LL Jan 20 06:36:12.164835 disk-uuid[771]: Warning: The kernel is still using the old partition table. Jan 20 06:36:12.164835 disk-uuid[771]: The new table will be used at the next reboot or after you Jan 20 06:36:12.164835 disk-uuid[771]: run partprobe(8) or kpartx(8) Jan 20 06:36:12.164835 disk-uuid[771]: The operation has completed successfully. Jan 20 06:36:12.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:12.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:12.185333 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 06:36:12.185766 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 06:36:12.193651 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 06:36:12.334802 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863) Jan 20 06:36:12.350022 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:36:12.350083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:36:12.406641 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:36:12.406708 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:36:12.445784 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:36:12.455670 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 06:36:12.507303 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 20 06:36:12.507346 kernel: audit: type=1130 audit(1768890972.467:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:12.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:12.472633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 06:36:13.824277 ignition[882]: Ignition 2.24.0 Jan 20 06:36:13.824358 ignition[882]: Stage: fetch-offline Jan 20 06:36:13.824399 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:13.824411 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:13.898781 ignition[882]: parsed url from cmdline: "" Jan 20 06:36:13.902722 ignition[882]: no config URL provided Jan 20 06:36:13.916666 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:36:13.925782 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:36:13.932424 ignition[882]: op(1): [started] loading QEMU firmware config module Jan 20 06:36:13.934722 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 06:36:14.288392 ignition[882]: op(1): [finished] loading QEMU firmware config module Jan 20 06:36:15.564077 ignition[882]: parsing config with SHA512: 4646d5350573d502a5885a690c5d9559ed142107806458155739f2478a49af21b283eab741a6eae0e54f2b607a0e096db0ca5ed4f2efbdb4b26b2596dc489d2f Jan 20 06:36:15.641071 unknown[882]: fetched base config from "system" Jan 20 06:36:15.641177 unknown[882]: fetched user config from "qemu" Jan 20 06:36:15.663078 ignition[882]: fetch-offline: fetch-offline passed Jan 20 06:36:15.675234 ignition[882]: Ignition finished successfully Jan 20 06:36:15.691659 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:36:15.739890 kernel: audit: type=1130 audit(1768890975.705:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:15.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:15.706038 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 06:36:15.709855 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 06:36:16.108344 kernel: hrtimer: interrupt took 3057507 ns Jan 20 06:36:16.297866 ignition[894]: Ignition 2.24.0 Jan 20 06:36:16.298059 ignition[894]: Stage: kargs Jan 20 06:36:16.299430 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:16.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:16.322767 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 06:36:16.379750 kernel: audit: type=1130 audit(1768890976.344:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:16.299441 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:16.348273 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 06:36:16.306345 ignition[894]: kargs: kargs passed Jan 20 06:36:16.306419 ignition[894]: Ignition finished successfully Jan 20 06:36:16.997817 ignition[902]: Ignition 2.24.0 Jan 20 06:36:16.997903 ignition[902]: Stage: disks Jan 20 06:36:16.998644 ignition[902]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:16.998660 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:17.048181 ignition[902]: disks: disks passed Jan 20 06:36:17.048403 ignition[902]: Ignition finished successfully Jan 20 06:36:17.067346 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 20 06:36:17.118915 kernel: audit: type=1130 audit(1768890977.078:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:17.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:17.079167 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 06:36:17.120812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 06:36:17.146270 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:36:17.202852 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:36:17.219118 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:36:17.257749 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 06:36:17.436395 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 06:36:17.457808 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 06:36:17.512816 kernel: audit: type=1130 audit(1768890977.467:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:17.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:17.472747 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 06:36:18.053060 kernel: EXT4-fs (vda9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. 
Jan 20 06:36:18.054795 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 06:36:18.072833 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 06:36:18.104251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:36:18.149429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 06:36:18.173201 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 06:36:18.173341 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 06:36:18.173371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:36:18.229230 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 06:36:18.283043 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Jan 20 06:36:18.284825 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 06:36:18.333810 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:36:18.333844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:36:18.360442 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:36:18.360702 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:36:18.364124 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:36:18.995830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 06:36:19.061352 kernel: audit: type=1130 audit(1768890979.007:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:19.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:19.011701 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 06:36:19.079648 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 06:36:19.109092 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 06:36:19.135378 kernel: BTRFS info (device vda6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:36:19.236270 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 06:36:19.284218 kernel: audit: type=1130 audit(1768890979.236:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:19.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:20.934150 ignition[1018]: INFO : Ignition 2.24.0 Jan 20 06:36:20.934150 ignition[1018]: INFO : Stage: mount Jan 20 06:36:20.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:20.970166 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:20.970166 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:20.970166 ignition[1018]: INFO : mount: mount passed Jan 20 06:36:20.970166 ignition[1018]: INFO : Ignition finished successfully Jan 20 06:36:21.057194 kernel: audit: type=1130 audit(1768890980.969:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:20.947422 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 06:36:20.974292 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 06:36:21.110316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:36:21.192950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1031) Jan 20 06:36:21.218883 kernel: BTRFS info (device vda6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:36:21.218967 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:36:21.256724 kernel: BTRFS info (device vda6): turning on async discard Jan 20 06:36:21.256799 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 06:36:21.261922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 06:36:21.779934 ignition[1048]: INFO : Ignition 2.24.0 Jan 20 06:36:21.779934 ignition[1048]: INFO : Stage: files Jan 20 06:36:21.779934 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:21.779934 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:21.830619 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Jan 20 06:36:21.848902 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 06:36:21.848902 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 06:36:21.881288 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 06:36:21.895089 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 06:36:21.924864 unknown[1048]: wrote ssh authorized keys file for user: core Jan 20 06:36:21.937880 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 06:36:21.937880 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:36:21.937880 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 06:36:22.170905 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 06:36:22.687840 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:36:22.687840 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:36:22.743874 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 06:36:23.551615 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 06:36:30.369685 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:36:30.369685 ignition[1048]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 06:36:30.409007 ignition[1048]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 06:36:30.631428 ignition[1048]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 06:36:30.676010 ignition[1048]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:36:30.696737 ignition[1048]: INFO : files: files passed Jan 20 06:36:30.696737 ignition[1048]: INFO : Ignition finished successfully Jan 20 06:36:30.858815 kernel: audit: type=1130 audit(1768890990.771:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:30.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:30.756020 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 06:36:30.777809 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 06:36:30.873862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 06:36:30.917046 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 06:36:30.927924 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 06:36:30.997766 kernel: audit: type=1130 audit(1768890990.940:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:30.997810 kernel: audit: type=1131 audit(1768890990.940:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:30.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:30.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:30.997917 initrd-setup-root-after-ignition[1079]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 06:36:31.019887 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:36:31.019887 initrd-setup-root-after-ignition[1082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:36:31.097011 kernel: audit: type=1130 audit(1768890991.050:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.097206 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:36:31.027787 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:36:31.051892 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 20 06:36:31.114695 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 06:36:31.586772 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 06:36:31.587147 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 06:36:31.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.639754 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 06:36:31.675180 kernel: audit: type=1130 audit(1768890991.633:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.675413 kernel: audit: type=1131 audit(1768890991.633:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:31.759182 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 06:36:31.774238 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 06:36:31.801761 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 06:36:32.188218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:36:32.287293 kernel: audit: type=1130 audit(1768890992.214:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:32.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:32.247810 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 06:36:32.381188 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:36:32.381935 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:36:32.407346 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:36:32.532820 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 06:36:32.542345 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 06:36:32.633657 kernel: audit: type=1131 audit(1768890992.559:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:32.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:32.544627 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:36:32.672205 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 06:36:32.673446 systemd[1]: Stopped target basic.target - Basic System. Jan 20 06:36:32.752339 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 06:36:32.775944 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:36:32.802819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 06:36:32.842664 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 20 06:36:32.862337 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 06:36:32.893426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:36:32.940375 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 06:36:32.992046 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 06:36:33.017277 systemd[1]: Stopped target swap.target - Swaps. Jan 20 06:36:33.018234 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 06:36:33.018865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:36:33.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.096335 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:36:33.131962 kernel: audit: type=1131 audit(1768890993.069:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.096794 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:36:33.133047 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 06:36:33.134920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:36:33.253401 kernel: audit: type=1131 audit(1768890993.212:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:33.171450 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 06:36:33.171789 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 06:36:33.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.253943 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 06:36:33.254377 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:36:33.264710 systemd[1]: Stopped target paths.target - Path Units. Jan 20 06:36:33.300445 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 06:36:33.302010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:36:33.316338 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 06:36:33.358843 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 06:36:33.405389 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 06:36:33.405742 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:36:33.416364 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 06:36:33.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.416974 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:36:33.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.437406 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 20 06:36:33.437863 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:36:33.460193 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 06:36:33.460683 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:36:33.484419 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 06:36:33.484812 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 06:36:33.616770 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 06:36:33.641837 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 06:36:33.652926 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 06:36:33.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.653418 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:36:33.678443 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 06:36:33.678920 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:36:33.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.743976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 06:36:33.744276 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:36:33.772344 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 20 06:36:33.782919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 06:36:33.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.889315 ignition[1106]: INFO : Ignition 2.24.0 Jan 20 06:36:33.889315 ignition[1106]: INFO : Stage: umount Jan 20 06:36:33.905399 ignition[1106]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:36:33.905399 ignition[1106]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 06:36:33.905399 ignition[1106]: INFO : umount: umount passed Jan 20 06:36:33.905399 ignition[1106]: INFO : Ignition finished successfully Jan 20 06:36:33.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.890778 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 06:36:33.898252 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 06:36:34.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.898403 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 20 06:36:34.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.907708 systemd[1]: Stopped target network.target - Network. Jan 20 06:36:34.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:33.960198 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 06:36:33.961005 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 06:36:33.969887 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 06:36:33.970041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 06:36:34.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.014028 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 06:36:34.014427 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 06:36:34.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.038866 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 06:36:34.039016 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 06:36:34.053639 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 06:36:34.089951 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 20 06:36:34.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.127211 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 06:36:34.128385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 06:36:34.159051 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 06:36:34.368000 audit: BPF prog-id=6 op=UNLOAD Jan 20 06:36:34.369000 audit: BPF prog-id=9 op=UNLOAD Jan 20 06:36:34.160967 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 06:36:34.254201 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 06:36:34.255767 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 06:36:34.370057 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 06:36:34.390888 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 06:36:34.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.397935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:36:34.443782 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 06:36:34.443955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 06:36:34.582996 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 06:36:34.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.646655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 20 06:36:34.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.646995 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:36:34.700448 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 06:36:34.700783 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:36:34.730267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 06:36:34.730390 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 06:36:34.748362 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:36:34.849302 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 06:36:34.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.849894 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:36:34.906661 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 06:36:34.906782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 06:36:34.930644 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 06:36:34.930785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:36:34.968925 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 20 06:36:35.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:34.969041 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:36:35.049398 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 06:36:35.049776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 06:36:35.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.072392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 06:36:35.072847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:36:35.172306 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 06:36:35.173016 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 06:36:35.173226 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:36:35.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.210288 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 06:36:35.210394 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:36:35.267011 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 20 06:36:35.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.267233 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:36:35.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.307221 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 06:36:35.307343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:36:35.336370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:36:35.336804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:35.397395 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 06:36:35.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:35.397746 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 06:36:35.534774 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 06:36:35.535357 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 06:36:35.563363 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 06:36:35.576052 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 06:36:35.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:35.679300 systemd[1]: Switching root. Jan 20 06:36:35.753967 systemd-journald[321]: Journal stopped Jan 20 06:36:40.002345 systemd-journald[321]: Received SIGTERM from PID 1 (systemd). Jan 20 06:36:40.002424 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 06:36:40.002444 kernel: SELinux: policy capability open_perms=1 Jan 20 06:36:40.002668 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 06:36:40.002686 kernel: SELinux: policy capability always_check_network=0 Jan 20 06:36:40.002703 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 06:36:40.002723 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 06:36:40.002737 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 06:36:40.002748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 06:36:40.002759 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 06:36:40.002773 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 20 06:36:40.002792 kernel: audit: type=1403 audit(1768890996.228:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 06:36:40.002812 systemd[1]: Successfully loaded SELinux policy in 232.862ms. Jan 20 06:36:40.002834 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.114ms. 
Jan 20 06:36:40.002847 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:36:40.002859 systemd[1]: Detected virtualization kvm. Jan 20 06:36:40.002871 systemd[1]: Detected architecture x86-64. Jan 20 06:36:40.002884 systemd[1]: Detected first boot. Jan 20 06:36:40.002896 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 06:36:40.002907 kernel: audit: type=1334 audit(1768890996.471:88): prog-id=10 op=LOAD Jan 20 06:36:40.002918 kernel: audit: type=1334 audit(1768890996.471:89): prog-id=10 op=UNLOAD Jan 20 06:36:40.002929 kernel: audit: type=1334 audit(1768890996.471:90): prog-id=11 op=LOAD Jan 20 06:36:40.002940 kernel: audit: type=1334 audit(1768890996.471:91): prog-id=11 op=UNLOAD Jan 20 06:36:40.002951 zram_generator::config[1151]: No configuration found. Jan 20 06:36:40.002966 kernel: Guest personality initialized and is inactive Jan 20 06:36:40.002977 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 06:36:40.002990 kernel: Initialized host personality Jan 20 06:36:40.003001 kernel: NET: Registered PF_VSOCK protocol family Jan 20 06:36:40.003012 systemd[1]: Populated /etc with preset unit settings. 
Jan 20 06:36:40.003023 kernel: audit: type=1334 audit(1768890998.166:92): prog-id=12 op=LOAD Jan 20 06:36:40.003034 kernel: audit: type=1334 audit(1768890998.166:93): prog-id=3 op=UNLOAD Jan 20 06:36:40.003051 kernel: audit: type=1334 audit(1768890998.166:94): prog-id=13 op=LOAD Jan 20 06:36:40.003064 kernel: audit: type=1334 audit(1768890998.166:95): prog-id=14 op=LOAD Jan 20 06:36:40.003075 kernel: audit: type=1334 audit(1768890998.166:96): prog-id=4 op=UNLOAD Jan 20 06:36:40.003086 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 06:36:40.003100 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 06:36:40.003112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 06:36:40.003226 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 06:36:40.003242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 06:36:40.003254 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 06:36:40.003266 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 06:36:40.003280 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 06:36:40.003296 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 06:36:40.003308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 06:36:40.003319 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 06:36:40.003331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:36:40.003343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:36:40.003358 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 20 06:36:40.003370 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 06:36:40.003383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 06:36:40.003394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:36:40.003406 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 06:36:40.003418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:36:40.003430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:36:40.003443 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 06:36:40.003455 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 06:36:40.003649 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 06:36:40.003668 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 06:36:40.003685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:36:40.003701 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:36:40.003719 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 06:36:40.003740 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:36:40.003760 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:36:40.003777 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 06:36:40.003794 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 06:36:40.003810 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 06:36:40.003831 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Jan 20 06:36:40.003848 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 06:36:40.003867 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:36:40.003887 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 06:36:40.003907 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 06:36:40.003926 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:36:40.003938 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:36:40.003950 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 06:36:40.003962 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 06:36:40.003974 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 06:36:40.003990 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 06:36:40.004002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:40.004014 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 06:36:40.004025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 06:36:40.004037 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 06:36:40.004049 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 06:36:40.004063 systemd[1]: Reached target machines.target - Containers. Jan 20 06:36:40.004074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 06:36:40.004087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 06:36:40.004098 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:36:40.004110 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 06:36:40.004217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:36:40.004232 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:36:40.004248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:36:40.004260 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 06:36:40.004271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:36:40.004283 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 06:36:40.004294 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 06:36:40.004306 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 06:36:40.004318 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 06:36:40.004332 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 06:36:40.004344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:36:40.004356 kernel: ACPI: bus type drm_connector registered Jan 20 06:36:40.004368 kernel: fuse: init (API version 7.41) Jan 20 06:36:40.004379 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:36:40.004393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:36:40.004405 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 20 06:36:40.004416 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 06:36:40.004450 systemd-journald[1237]: Collecting audit messages is enabled. Jan 20 06:36:40.004637 systemd-journald[1237]: Journal started Jan 20 06:36:40.004664 systemd-journald[1237]: Runtime Journal (/run/log/journal/7a85e3a24e404275b66ccda0f29f1aa7) is 6M, max 48M, 42M free. Jan 20 06:36:38.995000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 06:36:39.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:39.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:39.824000 audit: BPF prog-id=14 op=UNLOAD Jan 20 06:36:39.825000 audit: BPF prog-id=13 op=UNLOAD Jan 20 06:36:39.852000 audit: BPF prog-id=15 op=LOAD Jan 20 06:36:39.858000 audit: BPF prog-id=16 op=LOAD Jan 20 06:36:39.858000 audit: BPF prog-id=17 op=LOAD Jan 20 06:36:39.998000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 06:36:39.998000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd16ec810 a2=4000 a3=0 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:39.998000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 06:36:38.139240 systemd[1]: Queued start job for default target multi-user.target. Jan 20 06:36:38.167672 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 06:36:38.169821 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 06:36:38.170380 systemd[1]: systemd-journald.service: Consumed 4.011s CPU time. Jan 20 06:36:40.038709 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 06:36:40.070637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:36:40.102739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:40.132951 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:36:40.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:40.135032 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 06:36:40.149028 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 06:36:40.163239 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 06:36:40.176010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 06:36:40.189866 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 06:36:40.204043 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 06:36:40.215736 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 06:36:40.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.232102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:36:40.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.249807 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 06:36:40.251020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 06:36:40.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.266833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 20 06:36:40.267250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:36:40.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.282373 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:36:40.282920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:36:40.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.297746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:36:40.298305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:36:40.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.317284 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 20 06:36:40.318055 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 06:36:40.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.334089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:36:40.335397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:36:40.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.351767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:36:40.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.369058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:36:40.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:40.386296 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 06:36:40.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.404078 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 06:36:40.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.421122 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:36:40.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.457329 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:36:40.469755 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 06:36:40.485743 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 06:36:40.509766 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 06:36:40.521967 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 06:36:40.522226 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:36:40.533868 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 20 06:36:40.546105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:36:40.546363 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:36:40.568672 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 06:36:40.583863 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 06:36:40.599256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:36:40.603080 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 06:36:40.621103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:36:40.632360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:36:40.646361 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 06:36:40.662951 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:36:40.680299 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 06:36:40.686710 systemd-journald[1237]: Time spent on flushing to /var/log/journal/7a85e3a24e404275b66ccda0f29f1aa7 is 143.961ms for 1212 entries. Jan 20 06:36:40.686710 systemd-journald[1237]: System Journal (/var/log/journal/7a85e3a24e404275b66ccda0f29f1aa7) is 8M, max 163.5M, 155.5M free. Jan 20 06:36:40.861016 systemd-journald[1237]: Received client request to flush runtime journal. 
Jan 20 06:36:40.861066 kernel: loop1: detected capacity change from 0 to 111560 Jan 20 06:36:40.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.689049 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 06:36:40.745330 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 06:36:40.769704 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 06:36:40.790784 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 06:36:40.873862 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 06:36:40.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.922312 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 06:36:40.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.951736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:36:40.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:40.973890 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jan 20 06:36:40.974409 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. 
Jan 20 06:36:40.992887 kernel: loop2: detected capacity change from 0 to 50784 Jan 20 06:36:41.002363 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:36:41.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:41.022939 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 06:36:41.098361 kernel: loop3: detected capacity change from 0 to 224512 Jan 20 06:36:41.105109 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 06:36:41.181654 kernel: loop4: detected capacity change from 0 to 111560 Jan 20 06:36:41.188250 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 06:36:41.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:41.205000 audit: BPF prog-id=18 op=LOAD Jan 20 06:36:41.205000 audit: BPF prog-id=19 op=LOAD Jan 20 06:36:41.205000 audit: BPF prog-id=20 op=LOAD Jan 20 06:36:41.208375 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 06:36:41.224000 audit: BPF prog-id=21 op=LOAD Jan 20 06:36:41.227726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:36:41.245699 kernel: loop5: detected capacity change from 0 to 50784 Jan 20 06:36:41.255850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 20 06:36:41.296055 kernel: loop6: detected capacity change from 0 to 224512 Jan 20 06:36:41.296291 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 20 06:36:41.296328 kernel: audit: type=1334 audit(1768891001.272:141): prog-id=22 op=LOAD Jan 20 06:36:41.272000 audit: BPF prog-id=22 op=LOAD Jan 20 06:36:41.281888 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 06:36:41.272000 audit: BPF prog-id=23 op=LOAD Jan 20 06:36:41.308713 kernel: audit: type=1334 audit(1768891001.272:142): prog-id=23 op=LOAD Jan 20 06:36:41.327859 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 06:36:41.346425 kernel: audit: type=1334 audit(1768891001.272:143): prog-id=24 op=LOAD Jan 20 06:36:41.272000 audit: BPF prog-id=24 op=LOAD Jan 20 06:36:41.321000 audit: BPF prog-id=25 op=LOAD Jan 20 06:36:41.367245 kernel: audit: type=1334 audit(1768891001.321:144): prog-id=25 op=LOAD Jan 20 06:36:41.367351 kernel: audit: type=1334 audit(1768891001.321:145): prog-id=26 op=LOAD Jan 20 06:36:41.321000 audit: BPF prog-id=26 op=LOAD Jan 20 06:36:41.377627 kernel: audit: type=1334 audit(1768891001.321:146): prog-id=27 op=LOAD Jan 20 06:36:41.321000 audit: BPF prog-id=27 op=LOAD Jan 20 06:36:41.393250 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 20 06:36:41.472792 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 20 06:36:41.482854 systemd[1]: Reload requested from client PID 1273 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 06:36:41.482899 systemd[1]: Reloading... Jan 20 06:36:41.641102 systemd-nsresourced[1299]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 06:36:41.727116 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 20 06:36:41.727133 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 20 06:36:41.969703 zram_generator::config[1341]: No configuration found. 
Jan 20 06:36:42.508066 systemd-oomd[1296]: No swap; memory pressure usage will be degraded Jan 20 06:36:42.859763 systemd-resolved[1297]: Positive Trust Anchors: Jan 20 06:36:42.859860 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:36:42.859865 systemd-resolved[1297]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:36:42.859894 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:36:43.158784 systemd-resolved[1297]: Defaulting to hostname 'linux'. Jan 20 06:36:43.339391 systemd[1]: Reloading finished in 1855 ms. Jan 20 06:36:43.366944 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 06:36:43.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.382833 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 06:36:43.429980 kernel: audit: type=1130 audit(1768891003.381:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:43.430402 kernel: audit: type=1130 audit(1768891003.429:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.432054 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 06:36:43.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.475332 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:36:43.501748 kernel: audit: type=1130 audit(1768891003.473:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.515964 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 06:36:43.558301 kernel: audit: type=1130 audit(1768891003.513:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:43.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.561368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:36:43.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:43.597259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:36:43.630816 systemd[1]: Starting ensure-sysext.service... Jan 20 06:36:43.642311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:36:43.654000 audit: BPF prog-id=28 op=LOAD Jan 20 06:36:43.655000 audit: BPF prog-id=22 op=UNLOAD Jan 20 06:36:43.655000 audit: BPF prog-id=29 op=LOAD Jan 20 06:36:43.657000 audit: BPF prog-id=30 op=LOAD Jan 20 06:36:43.657000 audit: BPF prog-id=23 op=UNLOAD Jan 20 06:36:43.657000 audit: BPF prog-id=24 op=UNLOAD Jan 20 06:36:43.661000 audit: BPF prog-id=31 op=LOAD Jan 20 06:36:43.661000 audit: BPF prog-id=15 op=UNLOAD Jan 20 06:36:43.662000 audit: BPF prog-id=32 op=LOAD Jan 20 06:36:43.663000 audit: BPF prog-id=33 op=LOAD Jan 20 06:36:43.663000 audit: BPF prog-id=16 op=UNLOAD Jan 20 06:36:43.663000 audit: BPF prog-id=17 op=UNLOAD Jan 20 06:36:43.664000 audit: BPF prog-id=34 op=LOAD Jan 20 06:36:43.664000 audit: BPF prog-id=25 op=UNLOAD Jan 20 06:36:43.665000 audit: BPF prog-id=35 op=LOAD Jan 20 06:36:43.666000 audit: BPF prog-id=36 op=LOAD Jan 20 06:36:43.666000 audit: BPF prog-id=26 op=UNLOAD Jan 20 06:36:43.666000 audit: BPF prog-id=27 op=UNLOAD Jan 20 06:36:43.668000 audit: BPF prog-id=37 op=LOAD Jan 20 06:36:43.669000 audit: BPF prog-id=18 op=UNLOAD Jan 20 
06:36:43.669000 audit: BPF prog-id=38 op=LOAD Jan 20 06:36:43.669000 audit: BPF prog-id=39 op=LOAD Jan 20 06:36:43.669000 audit: BPF prog-id=19 op=UNLOAD Jan 20 06:36:43.669000 audit: BPF prog-id=20 op=UNLOAD Jan 20 06:36:43.673000 audit: BPF prog-id=40 op=LOAD Jan 20 06:36:43.673000 audit: BPF prog-id=21 op=UNLOAD Jan 20 06:36:43.690982 systemd[1]: Reload requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Jan 20 06:36:43.691307 systemd[1]: Reloading... Jan 20 06:36:43.715088 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 06:36:43.715150 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 06:36:43.717732 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 06:36:43.721771 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 20 06:36:43.721877 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 20 06:36:43.736890 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:36:43.736990 systemd-tmpfiles[1382]: Skipping /boot Jan 20 06:36:43.763928 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:36:43.763951 systemd-tmpfiles[1382]: Skipping /boot Jan 20 06:36:43.846821 zram_generator::config[1411]: No configuration found. Jan 20 06:36:44.146056 systemd[1]: Reloading finished in 454 ms. Jan 20 06:36:44.185828 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 06:36:44.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:44.206000 audit: BPF prog-id=41 op=LOAD Jan 20 06:36:44.206000 audit: BPF prog-id=40 op=UNLOAD Jan 20 06:36:44.209000 audit: BPF prog-id=42 op=LOAD Jan 20 06:36:44.209000 audit: BPF prog-id=34 op=UNLOAD Jan 20 06:36:44.209000 audit: BPF prog-id=43 op=LOAD Jan 20 06:36:44.209000 audit: BPF prog-id=44 op=LOAD Jan 20 06:36:44.209000 audit: BPF prog-id=35 op=UNLOAD Jan 20 06:36:44.209000 audit: BPF prog-id=36 op=UNLOAD Jan 20 06:36:44.210000 audit: BPF prog-id=45 op=LOAD Jan 20 06:36:44.210000 audit: BPF prog-id=28 op=UNLOAD Jan 20 06:36:44.210000 audit: BPF prog-id=46 op=LOAD Jan 20 06:36:44.210000 audit: BPF prog-id=47 op=LOAD Jan 20 06:36:44.210000 audit: BPF prog-id=29 op=UNLOAD Jan 20 06:36:44.210000 audit: BPF prog-id=30 op=UNLOAD Jan 20 06:36:44.211000 audit: BPF prog-id=48 op=LOAD Jan 20 06:36:44.211000 audit: BPF prog-id=31 op=UNLOAD Jan 20 06:36:44.212000 audit: BPF prog-id=49 op=LOAD Jan 20 06:36:44.212000 audit: BPF prog-id=50 op=LOAD Jan 20 06:36:44.212000 audit: BPF prog-id=32 op=UNLOAD Jan 20 06:36:44.212000 audit: BPF prog-id=33 op=UNLOAD Jan 20 06:36:44.214000 audit: BPF prog-id=51 op=LOAD Jan 20 06:36:44.214000 audit: BPF prog-id=37 op=UNLOAD Jan 20 06:36:44.214000 audit: BPF prog-id=52 op=LOAD Jan 20 06:36:44.214000 audit: BPF prog-id=53 op=LOAD Jan 20 06:36:44.214000 audit: BPF prog-id=38 op=UNLOAD Jan 20 06:36:44.214000 audit: BPF prog-id=39 op=UNLOAD Jan 20 06:36:44.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.237445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:36:44.279696 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:36:44.309798 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 20 06:36:44.326629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 06:36:44.347744 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 06:36:44.360000 audit: BPF prog-id=8 op=UNLOAD Jan 20 06:36:44.360000 audit: BPF prog-id=7 op=UNLOAD Jan 20 06:36:44.361000 audit: BPF prog-id=54 op=LOAD Jan 20 06:36:44.362000 audit: BPF prog-id=55 op=LOAD Jan 20 06:36:44.375340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:36:44.392089 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 06:36:44.411863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.412109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:36:44.415112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:36:44.430432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:36:44.448066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:36:44.459881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:36:44.460429 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:36:44.461809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 20 06:36:44.461942 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.464000 audit[1464]: SYSTEM_BOOT pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.470029 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 06:36:44.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.493985 systemd-udevd[1458]: Using default interface naming scheme 'v257'. Jan 20 06:36:44.494432 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:36:44.495037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:36:44.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.524091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.524684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:36:44.532979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 06:36:44.546023 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:36:44.547031 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:36:44.547345 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:36:44.548888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.555653 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:36:44.556274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:36:44.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.572871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:36:44.573363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:36:44.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:44.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:44.591375 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:36:44.592036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:36:44.603000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:36:44.603000 audit[1484]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc262a5d90 a2=420 a3=0 items=0 ppid=1453 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:44.603000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:36:44.605452 augenrules[1484]: No rules Jan 20 06:36:44.608363 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:36:44.609278 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:36:44.628117 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 06:36:44.646123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:36:44.688297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.692756 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:36:44.703875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:36:44.708153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 20 06:36:44.729100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:36:44.743377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:36:44.759079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:36:44.770896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:36:44.771294 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:36:44.771692 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:36:44.779864 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:36:44.794850 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:36:44.800988 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 06:36:44.819942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:36:44.821812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:36:44.838975 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:36:44.839804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:36:44.854432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:36:44.855059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:36:44.876830 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:36:44.877328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 06:36:44.899151 systemd[1]: Finished ensure-sysext.service. Jan 20 06:36:44.917939 augenrules[1513]: /sbin/augenrules: No change Jan 20 06:36:44.942065 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 06:36:44.954000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 06:36:44.954000 audit[1547]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec96ec570 a2=420 a3=0 items=0 ppid=1513 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:44.954000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:36:44.955000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:36:44.955000 audit[1547]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec96eea00 a2=420 a3=0 items=0 ppid=1513 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:44.955000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:36:44.956747 augenrules[1547]: No rules Jan 20 06:36:44.961448 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:36:44.962849 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:36:45.027921 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 20 06:36:45.051633 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 06:36:45.056035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 06:36:45.070064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:36:45.070321 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:36:45.073380 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 06:36:45.089647 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:36:45.108682 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 06:36:45.127725 kernel: ACPI: button: Power Button [PWRF] Jan 20 06:36:45.122789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 06:36:45.164011 systemd-networkd[1524]: lo: Link UP Jan 20 06:36:45.164850 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 20 06:36:45.164021 systemd-networkd[1524]: lo: Gained carrier Jan 20 06:36:45.171388 systemd-networkd[1524]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:36:45.171401 systemd-networkd[1524]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:36:45.176895 systemd-networkd[1524]: eth0: Link UP Jan 20 06:36:45.179074 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 06:36:45.178912 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 20 06:36:45.180132 systemd-networkd[1524]: eth0: Gained carrier Jan 20 06:36:45.180267 systemd-networkd[1524]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:36:45.205287 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 06:36:45.205049 systemd[1]: Reached target network.target - Network. Jan 20 06:36:45.222916 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 06:36:45.244071 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 06:36:45.274969 systemd-networkd[1524]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 06:36:45.363892 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 06:36:45.390148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:45.437067 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 06:36:45.458423 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 06:36:46.002522 systemd-timesyncd[1558]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 06:36:46.002672 systemd-timesyncd[1558]: Initial clock synchronization to Tue 2026-01-20 06:36:46.002430 UTC. Jan 20 06:36:46.007688 systemd-resolved[1297]: Clock change detected. Flushing caches. Jan 20 06:36:46.020922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:36:46.021480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:36:46.045864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:36:46.440953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 06:36:46.447415 ldconfig[1455]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 20 06:36:46.474977 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 20 06:36:46.530992 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 20 06:36:46.779239 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 20 06:36:46.799687 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 06:36:46.817603 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 20 06:36:46.837863 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 20 06:36:46.852018 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 20 06:36:46.865483 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 20 06:36:46.878939 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 20 06:36:46.892716 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 20 06:36:46.907365 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 20 06:36:46.918676 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 20 06:36:46.932457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 20 06:36:46.932575 systemd[1]: Reached target paths.target - Path Units.
Jan 20 06:36:46.942441 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 06:36:46.960844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 20 06:36:46.978669 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 20 06:36:47.006424 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 20 06:36:47.020704 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 20 06:36:47.034646 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 20 06:36:47.077475 kernel: kvm_amd: TSC scaling supported
Jan 20 06:36:47.077720 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 06:36:47.077853 kernel: kvm_amd: Nested Paging enabled
Jan 20 06:36:47.082447 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 06:36:47.087547 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 06:36:47.101892 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 20 06:36:47.121979 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 20 06:36:47.142392 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 20 06:36:47.164694 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 06:36:47.175388 systemd[1]: Reached target basic.target - Basic System.
Jan 20 06:36:47.185947 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 20 06:36:47.187617 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 20 06:36:47.197709 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 20 06:36:47.216597 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 20 06:36:47.433630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 20 06:36:47.465571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 20 06:36:47.484864 systemd-networkd[1524]: eth0: Gained IPv6LL
Jan 20 06:36:47.496529 jq[1601]: false
Jan 20 06:36:47.502284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 20 06:36:47.517428 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 20 06:36:47.520635 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 20 06:36:47.551725 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 20 06:36:47.570404 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing passwd entry cache
Jan 20 06:36:47.562993 oslogin_cache_refresh[1603]: Refreshing passwd entry cache
Jan 20 06:36:47.588632 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting users, quitting
Jan 20 06:36:47.588632 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 06:36:47.588632 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing group entry cache
Jan 20 06:36:47.587720 oslogin_cache_refresh[1603]: Failure getting users, quitting
Jan 20 06:36:47.587958 oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 20 06:36:47.588601 oslogin_cache_refresh[1603]: Refreshing group entry cache
Jan 20 06:36:47.608018 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting groups, quitting
Jan 20 06:36:47.608018 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 06:36:47.607582 oslogin_cache_refresh[1603]: Failure getting groups, quitting
Jan 20 06:36:47.607600 oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 20 06:36:47.614354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 20 06:36:47.633424 extend-filesystems[1602]: Found /dev/vda6
Jan 20 06:36:47.653693 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 20 06:36:47.671222 extend-filesystems[1602]: Found /dev/vda9
Jan 20 06:36:47.671222 extend-filesystems[1602]: Checking size of /dev/vda9
Jan 20 06:36:47.720994 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 20 06:36:47.722636 extend-filesystems[1602]: Resized partition /dev/vda9
Jan 20 06:36:47.759955 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Jan 20 06:36:47.755472 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 20 06:36:47.760247 extend-filesystems[1622]: resize2fs 1.47.3 (8-Jul-2025)
Jan 20 06:36:47.770742 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 20 06:36:47.771862 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 20 06:36:47.775395 systemd[1]: Starting update-engine.service - Update Engine...
Jan 20 06:36:47.802451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 20 06:36:47.823906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 06:36:47.844450 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 20 06:36:47.859448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 20 06:36:47.861265 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 20 06:36:47.861922 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 20 06:36:47.867358 update_engine[1624]: I20260120 06:36:47.864408 1624 main.cc:92] Flatcar Update Engine starting
Jan 20 06:36:47.862944 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 20 06:36:47.888247 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Jan 20 06:36:47.886878 systemd[1]: motdgen.service: Deactivated successfully.
Jan 20 06:36:47.929988 jq[1625]: true
Jan 20 06:36:47.888897 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 20 06:36:47.909211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 20 06:36:47.909632 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 20 06:36:47.956686 jq[1637]: true
Jan 20 06:36:47.957501 extend-filesystems[1622]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 20 06:36:47.957501 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 20 06:36:47.957501 extend-filesystems[1622]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Jan 20 06:36:47.973012 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 20 06:36:48.026557 extend-filesystems[1602]: Resized filesystem in /dev/vda9
Jan 20 06:36:47.974249 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 20 06:36:48.041690 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 06:36:48.045551 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 20 06:36:48.067409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 06:36:48.088228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 20 06:36:48.149510 kernel: EDAC MC: Ver: 3.0.0
Jan 20 06:36:48.149589 tar[1635]: linux-amd64/LICENSE
Jan 20 06:36:48.151577 tar[1635]: linux-amd64/helm
Jan 20 06:36:48.187528 systemd-logind[1623]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 20 06:36:48.187663 systemd-logind[1623]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 20 06:36:48.190496 systemd-logind[1623]: New seat seat0.
Jan 20 06:36:48.194002 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 20 06:36:48.238926 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 20 06:36:48.239571 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 20 06:36:48.251429 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 20 06:36:48.276964 dbus-daemon[1599]: [system] SELinux support is enabled
Jan 20 06:36:48.277487 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 20 06:36:48.291508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 20 06:36:48.291535 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 20 06:36:48.300962 bash[1680]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 06:36:48.305015 update_engine[1624]: I20260120 06:36:48.304453 1624 update_check_scheduler.cc:74] Next update check in 2m52s
Jan 20 06:36:48.306604 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 20 06:36:48.306721 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 20 06:36:48.324197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 20 06:36:48.343560 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 20 06:36:48.362011 systemd[1]: Started update-engine.service - Update Engine.
Jan 20 06:36:48.362371 dbus-daemon[1599]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 20 06:36:48.379630 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 20 06:36:48.382748 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 20 06:36:48.517748 locksmithd[1694]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 20 06:36:48.553741 containerd[1645]: time="2026-01-20T06:36:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 20 06:36:48.558998 containerd[1645]: time="2026-01-20T06:36:48.558603741Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 20 06:36:48.606890 containerd[1645]: time="2026-01-20T06:36:48.606711862Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.135µs"
Jan 20 06:36:48.607397 containerd[1645]: time="2026-01-20T06:36:48.607365301Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 20 06:36:48.607572 containerd[1645]: time="2026-01-20T06:36:48.607545538Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 20 06:36:48.607677 containerd[1645]: time="2026-01-20T06:36:48.607655433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 20 06:36:48.608342 containerd[1645]: time="2026-01-20T06:36:48.608316847Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 20 06:36:48.608441 containerd[1645]: time="2026-01-20T06:36:48.608420250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 06:36:48.608748 containerd[1645]: time="2026-01-20T06:36:48.608720160Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610241 containerd[1645]: time="2026-01-20T06:36:48.608948817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610241 containerd[1645]: time="2026-01-20T06:36:48.609576378Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610241 containerd[1645]: time="2026-01-20T06:36:48.609600593Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610241 containerd[1645]: time="2026-01-20T06:36:48.609617625Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610241 containerd[1645]: time="2026-01-20T06:36:48.609631330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610541 containerd[1645]: time="2026-01-20T06:36:48.610519138Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610617 containerd[1645]: time="2026-01-20T06:36:48.610599538Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 20 06:36:48.610913 containerd[1645]: time="2026-01-20T06:36:48.610890301Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.611509 containerd[1645]: time="2026-01-20T06:36:48.611485492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.611621 containerd[1645]: time="2026-01-20T06:36:48.611599975Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 20 06:36:48.611697 containerd[1645]: time="2026-01-20T06:36:48.611678301Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 20 06:36:48.612251 containerd[1645]: time="2026-01-20T06:36:48.612009690Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 20 06:36:48.613324 containerd[1645]: time="2026-01-20T06:36:48.613296141Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 20 06:36:48.613612 containerd[1645]: time="2026-01-20T06:36:48.613586133Z" level=info msg="metadata content store policy set" policy=shared
Jan 20 06:36:48.627398 containerd[1645]: time="2026-01-20T06:36:48.627367937Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 20 06:36:48.628350 containerd[1645]: time="2026-01-20T06:36:48.628323801Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 20 06:36:48.629632 containerd[1645]: time="2026-01-20T06:36:48.629595495Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 20 06:36:48.629719 containerd[1645]: time="2026-01-20T06:36:48.629698827Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 20 06:36:48.629921 containerd[1645]: time="2026-01-20T06:36:48.629899983Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 20 06:36:48.630022 containerd[1645]: time="2026-01-20T06:36:48.630002264Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 20 06:36:48.630328 containerd[1645]: time="2026-01-20T06:36:48.630307183Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 20 06:36:48.630401 containerd[1645]: time="2026-01-20T06:36:48.630383906Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 20 06:36:48.633428 containerd[1645]: time="2026-01-20T06:36:48.633402710Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 20 06:36:48.633521 containerd[1645]: time="2026-01-20T06:36:48.633500103Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 20 06:36:48.633607 containerd[1645]: time="2026-01-20T06:36:48.633587426Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 20 06:36:48.633706 containerd[1645]: time="2026-01-20T06:36:48.633684426Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 20 06:36:48.633908 containerd[1645]: time="2026-01-20T06:36:48.633884099Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 20 06:36:48.633990 containerd[1645]: time="2026-01-20T06:36:48.633972524Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 20 06:36:48.634453 containerd[1645]: time="2026-01-20T06:36:48.634430218Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 20 06:36:48.634549 containerd[1645]: time="2026-01-20T06:36:48.634531147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 20 06:36:48.634631 containerd[1645]: time="2026-01-20T06:36:48.634611196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635500296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635527056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635541422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635557763Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635576999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635592057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635609810Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635625259Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635655446Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635700960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635716499Z" level=info msg="Start snapshots syncer"
Jan 20 06:36:48.636251 containerd[1645]: time="2026-01-20T06:36:48.635873613Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 20 06:36:48.637369 containerd[1645]: time="2026-01-20T06:36:48.637011236Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 20 06:36:48.638289 containerd[1645]: time="2026-01-20T06:36:48.638263423Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 20 06:36:48.642683 containerd[1645]: time="2026-01-20T06:36:48.642655922Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 20 06:36:48.643427 containerd[1645]: time="2026-01-20T06:36:48.643004512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 20 06:36:48.643543 containerd[1645]: time="2026-01-20T06:36:48.643519003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 20 06:36:48.643635 containerd[1645]: time="2026-01-20T06:36:48.643604382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 20 06:36:48.643721 containerd[1645]: time="2026-01-20T06:36:48.643701083Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 20 06:36:48.643924 containerd[1645]: time="2026-01-20T06:36:48.643901938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 20 06:36:48.644007 containerd[1645]: time="2026-01-20T06:36:48.643988199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 20 06:36:48.644313 containerd[1645]: time="2026-01-20T06:36:48.644292627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 20 06:36:48.644391 containerd[1645]: time="2026-01-20T06:36:48.644373017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 20 06:36:48.644491 containerd[1645]: time="2026-01-20T06:36:48.644469046Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 20 06:36:48.644605 containerd[1645]: time="2026-01-20T06:36:48.644583249Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 06:36:48.644688 containerd[1645]: time="2026-01-20T06:36:48.644667587Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644740343Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644877699Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644903517Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644919547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644934635Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644951777Z" level=info msg="runtime interface created"
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644959481Z" level=info msg="created NRI interface"
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644972807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.644989077Z" level=info msg="Connect containerd service"
Jan 20 06:36:48.645431 containerd[1645]: time="2026-01-20T06:36:48.645015677Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 20 06:36:48.654434 containerd[1645]: time="2026-01-20T06:36:48.654400143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 20 06:36:48.673451 sshd_keygen[1650]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 20 06:36:48.763301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 20 06:36:48.780290 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 20 06:36:48.848399 systemd[1]: issuegen.service: Deactivated successfully.
Jan 20 06:36:48.848905 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 20 06:36:48.869589 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 20 06:36:48.913683 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 20 06:36:48.929513 tar[1635]: linux-amd64/README.md
Jan 20 06:36:48.938743 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.947900034Z" level=info msg="Start subscribing containerd event"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.947959025Z" level=info msg="Start recovering state"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948277709Z" level=info msg="Start event monitor"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948296274Z" level=info msg="Start cni network conf syncer for default"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948309038Z" level=info msg="Start streaming server"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948320058Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948331459Z" level=info msg="runtime interface starting up..."
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948339073Z" level=info msg="starting plugins..."
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.948354412Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.950441528Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 20 06:36:48.951448 containerd[1645]: time="2026-01-20T06:36:48.950497683Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 20 06:36:48.952632 containerd[1645]: time="2026-01-20T06:36:48.952428046Z" level=info msg="containerd successfully booted in 0.399419s"
Jan 20 06:36:48.955372 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 20 06:36:48.969399 systemd[1]: Reached target getty.target - Login Prompts.
Jan 20 06:36:48.980502 systemd[1]: Started containerd.service - containerd container runtime.
Jan 20 06:36:49.020988 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 20 06:36:49.985278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 06:36:50.002464 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 06:36:50.017689 systemd[1]: Startup finished in 8.097s (kernel) + 30.105s (initrd) + 13.483s (userspace) = 51.686s.
Jan 20 06:36:50.018458 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 06:36:51.218323 kubelet[1739]: E0120 06:36:51.217765 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 06:36:51.224483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 06:36:51.224974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 06:36:51.225988 systemd[1]: kubelet.service: Consumed 1.518s CPU time, 267.7M memory peak.
Jan 20 06:36:56.217643 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 20 06:36:56.219973 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:36564.service - OpenSSH per-connection server daemon (10.0.0.1:36564).
Jan 20 06:36:56.415017 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 36564 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM
Jan 20 06:36:56.420972 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 06:36:56.438331 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 20 06:36:56.440299 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 20 06:36:56.453537 systemd-logind[1623]: New session 1 of user core.
Jan 20 06:36:56.492722 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 06:36:56.499367 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 06:36:56.536443 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:56.546579 systemd-logind[1623]: New session 2 of user core. Jan 20 06:36:56.758363 systemd[1758]: Queued start job for default target default.target. Jan 20 06:36:56.785291 systemd[1758]: Created slice app.slice - User Application Slice. Jan 20 06:36:56.785415 systemd[1758]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 06:36:56.785430 systemd[1758]: Reached target paths.target - Paths. Jan 20 06:36:56.785573 systemd[1758]: Reached target timers.target - Timers. Jan 20 06:36:56.788446 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 06:36:56.790515 systemd[1758]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 06:36:56.813484 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 06:36:56.813579 systemd[1758]: Reached target sockets.target - Sockets. Jan 20 06:36:56.815539 systemd[1758]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 06:36:56.815740 systemd[1758]: Reached target basic.target - Basic System. Jan 20 06:36:56.815957 systemd[1758]: Reached target default.target - Main User Target. Jan 20 06:36:56.816284 systemd[1758]: Startup finished in 254ms. Jan 20 06:36:56.816451 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 06:36:56.830714 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 06:36:56.866944 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572). 
Jan 20 06:36:56.981963 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:56.985999 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:57.002951 systemd-logind[1623]: New session 3 of user core. Jan 20 06:36:57.021414 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 06:36:57.057795 sshd[1776]: Connection closed by 10.0.0.1 port 36572 Jan 20 06:36:57.059516 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 20 06:36:57.070625 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:36572.service: Deactivated successfully. Jan 20 06:36:57.074462 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 06:36:57.076557 systemd-logind[1623]: Session 3 logged out. Waiting for processes to exit. Jan 20 06:36:57.082786 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:36582.service - OpenSSH per-connection server daemon (10.0.0.1:36582). Jan 20 06:36:57.084321 systemd-logind[1623]: Removed session 3. Jan 20 06:36:57.206555 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 36582 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:57.209731 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:57.223712 systemd-logind[1623]: New session 4 of user core. Jan 20 06:36:57.241690 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 06:36:57.268373 sshd[1786]: Connection closed by 10.0.0.1 port 36582 Jan 20 06:36:57.268831 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Jan 20 06:36:57.285392 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:36582.service: Deactivated successfully. Jan 20 06:36:57.289569 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 06:36:57.293353 systemd-logind[1623]: Session 4 logged out. Waiting for processes to exit. 
Jan 20 06:36:57.298745 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:36598.service - OpenSSH per-connection server daemon (10.0.0.1:36598). Jan 20 06:36:57.300388 systemd-logind[1623]: Removed session 4. Jan 20 06:36:57.413994 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 36598 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:57.417727 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:57.429731 systemd-logind[1623]: New session 5 of user core. Jan 20 06:36:57.442522 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 06:36:57.479710 sshd[1796]: Connection closed by 10.0.0.1 port 36598 Jan 20 06:36:57.480943 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Jan 20 06:36:57.491622 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:36598.service: Deactivated successfully. Jan 20 06:36:57.495685 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 06:36:57.500158 systemd-logind[1623]: Session 5 logged out. Waiting for processes to exit. Jan 20 06:36:57.504478 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606). Jan 20 06:36:57.505645 systemd-logind[1623]: Removed session 5. Jan 20 06:36:57.614773 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:57.616949 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:57.630996 systemd-logind[1623]: New session 6 of user core. Jan 20 06:36:57.667680 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 20 06:36:57.751527 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 06:36:57.752481 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:36:57.782777 sudo[1807]: pam_unix(sudo:session): session closed for user root Jan 20 06:36:57.786989 sshd[1806]: Connection closed by 10.0.0.1 port 36606 Jan 20 06:36:57.790496 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 20 06:36:57.818599 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:36606.service: Deactivated successfully. Jan 20 06:36:57.822994 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 06:36:57.827644 systemd-logind[1623]: Session 6 logged out. Waiting for processes to exit. Jan 20 06:36:57.832576 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:36614.service - OpenSSH per-connection server daemon (10.0.0.1:36614). Jan 20 06:36:57.836318 systemd-logind[1623]: Removed session 6. Jan 20 06:36:57.945583 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 36614 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:57.948780 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:57.961796 systemd-logind[1623]: New session 7 of user core. Jan 20 06:36:57.969631 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 06:36:58.017560 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 06:36:58.018373 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:36:58.031310 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 20 06:36:58.052992 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 06:36:58.053777 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:36:58.071729 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:36:58.202000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 06:36:58.205600 augenrules[1844]: No rules Jan 20 06:36:58.208464 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:36:58.209376 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 06:36:58.211834 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 20 06:36:58.212498 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 20 06:36:58.212566 kernel: audit: type=1305 audit(1768891018.202:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 06:36:58.216207 sshd[1818]: Connection closed by 10.0.0.1 port 36614 Jan 20 06:36:58.215545 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 20 06:36:58.202000 audit[1844]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd40cee200 a2=420 a3=0 items=0 ppid=1825 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:58.275399 kernel: audit: type=1300 audit(1768891018.202:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd40cee200 a2=420 a3=0 items=0 ppid=1825 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:58.276505 kernel: audit: type=1327 audit(1768891018.202:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:36:58.202000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:36:58.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.325382 kernel: audit: type=1130 audit(1768891018.209:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:58.325434 kernel: audit: type=1131 audit(1768891018.209:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.356458 kernel: audit: type=1106 audit(1768891018.210:225): pid=1819 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.210000 audit[1819]: USER_END pid=1819 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.211000 audit[1819]: CRED_DISP pid=1819 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.421422 kernel: audit: type=1104 audit(1768891018.211:226): pid=1819 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:58.421486 kernel: audit: type=1106 audit(1768891018.217:227): pid=1814 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.217000 audit[1814]: USER_END pid=1814 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.429697 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:36614.service: Deactivated successfully. Jan 20 06:36:58.433521 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 06:36:58.435792 systemd-logind[1623]: Session 7 logged out. Waiting for processes to exit. Jan 20 06:36:58.440999 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:36620.service - OpenSSH per-connection server daemon (10.0.0.1:36620). Jan 20 06:36:58.443562 systemd-logind[1623]: Removed session 7. Jan 20 06:36:58.217000 audit[1814]: CRED_DISP pid=1814 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.35:22-10.0.0.1:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:58.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.35:22-10.0.0.1:36620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.500287 kernel: audit: type=1104 audit(1768891018.217:228): pid=1814 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.500378 kernel: audit: type=1131 audit(1768891018.429:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.35:22-10.0.0.1:36614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.620000 audit[1853]: USER_ACCT pid=1853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.622573 sshd[1853]: Accepted publickey for core from 10.0.0.1 port 36620 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:36:58.623000 audit[1853]: CRED_ACQ pid=1853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.623000 audit[1853]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd900183a0 a2=3 a3=0 items=0 ppid=1 pid=1853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:36:58.623000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:36:58.625694 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:36:58.639482 systemd-logind[1623]: New session 8 of user core. Jan 20 06:36:58.649569 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 06:36:58.655000 audit[1853]: USER_START pid=1853 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.659000 audit[1857]: CRED_ACQ pid=1857 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:36:58.690000 audit[1858]: USER_ACCT pid=1858 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.692312 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 06:36:58.691000 audit[1858]: CRED_REFR pid=1858 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:36:58.693005 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:36:58.692000 audit[1858]: USER_START pid=1858 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 06:36:59.480273 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 06:36:59.514736 (dockerd)[1879]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 06:37:00.138725 dockerd[1879]: time="2026-01-20T06:37:00.137626804Z" level=info msg="Starting up" Jan 20 06:37:00.141013 dockerd[1879]: time="2026-01-20T06:37:00.140594811Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 06:37:00.191349 dockerd[1879]: time="2026-01-20T06:37:00.190718692Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 06:37:00.320712 dockerd[1879]: time="2026-01-20T06:37:00.320565808Z" level=info msg="Loading containers: start." Jan 20 06:37:00.355678 kernel: Initializing XFRM netlink socket Jan 20 06:37:00.648000 audit[1932]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.648000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdfb868720 a2=0 a3=0 items=0 ppid=1879 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 06:37:00.664000 audit[1934]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.664000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd90b43f20 a2=0 a3=0 items=0 ppid=1879 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 06:37:00.679000 audit[1936]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.679000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9eac2890 a2=0 a3=0 items=0 ppid=1879 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 06:37:00.697000 audit[1938]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.697000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc46509510 a2=0 a3=0 items=0 ppid=1879 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 06:37:00.711000 audit[1940]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.711000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbe2ab290 a2=0 a3=0 items=0 ppid=1879 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 06:37:00.726000 audit[1942]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.726000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe2f50bea0 a2=0 a3=0 items=0 ppid=1879 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:37:00.742000 audit[1944]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.742000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc72970a30 a2=0 a3=0 items=0 ppid=1879 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:37:00.756000 audit[1946]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.756000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc63a08590 a2=0 a3=0 items=0 ppid=1879 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 06:37:00.846000 audit[1949]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.846000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffc77b6210 a2=0 a3=0 items=0 ppid=1879 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.846000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 06:37:00.860000 audit[1951]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.860000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc9f288da0 a2=0 a3=0 items=0 ppid=1879 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 06:37:00.878000 audit[1953]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.878000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 
a1=7fffda7715a0 a2=0 a3=0 items=0 ppid=1879 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 06:37:00.896000 audit[1955]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.896000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff6fe7d930 a2=0 a3=0 items=0 ppid=1879 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:37:00.913000 audit[1957]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:00.913000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff55967530 a2=0 a3=0 items=0 ppid=1879 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:00.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 06:37:01.164000 audit[1987]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.164000 audit[1987]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc99e26cf0 a2=0 a3=0 items=0 ppid=1879 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 06:37:01.178000 audit[1989]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.178000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff3543b6e0 a2=0 a3=0 items=0 ppid=1879 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 06:37:01.193000 audit[1991]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.193000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0faf2430 a2=0 a3=0 items=0 ppid=1879 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 06:37:01.208000 audit[1993]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.208000 audit[1993]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffedabd7800 a2=0 a3=0 items=0 ppid=1879 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.208000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 06:37:01.223000 audit[1995]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.223000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce021df40 a2=0 a3=0 items=0 ppid=1879 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 06:37:01.241000 audit[1997]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.241000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffded1b490 a2=0 a3=0 items=0 ppid=1879 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:37:01.257000 audit[1999]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.257000 audit[1999]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=112 a0=3 a1=7fff438fdb90 a2=0 a3=0 items=0 ppid=1879 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:37:01.277000 audit[2001]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.277000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffd3ac8d70 a2=0 a3=0 items=0 ppid=1879 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 06:37:01.295988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 20 06:37:01.294000 audit[2003]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.294000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe19e84240 a2=0 a3=0 items=0 ppid=1879 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 06:37:01.300498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:01.313000 audit[2006]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.313000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe162b5950 a2=0 a3=0 items=0 ppid=1879 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 06:37:01.332000 audit[2008]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.332000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdc01295b0 a2=0 a3=0 items=0 ppid=1879 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 06:37:01.348000 audit[2012]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.348000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc85ac2da0 a2=0 a3=0 items=0 ppid=1879 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:37:01.363000 audit[2014]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.363000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffffbd8ea00 a2=0 a3=0 items=0 ppid=1879 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 06:37:01.405000 audit[2019]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.405000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe487e5050 a2=0 a3=0 items=0 ppid=1879 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 06:37:01.424000 audit[2021]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.424000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc504a0480 a2=0 a3=0 items=0 ppid=1879 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 06:37:01.441000 audit[2023]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.441000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc15b45a40 a2=0 a3=0 items=0 ppid=1879 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 06:37:01.457000 audit[2025]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.457000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdad809b50 a2=0 a3=0 items=0 ppid=1879 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 06:37:01.472000 audit[2027]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.472000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea5e81390 a2=0 a3=0 items=0 ppid=1879 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 06:37:01.490000 audit[2029]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:01.490000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffed7e5e830 a2=0 a3=0 items=0 ppid=1879 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 06:37:01.656000 audit[2036]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.656000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc7b3e1710 a2=0 a3=0 items=0 ppid=1879 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 06:37:01.673000 audit[2040]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.673000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc2758cf00 a2=0 a3=0 items=0 ppid=1879 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 06:37:01.684612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:01.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:37:01.705849 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:37:01.743000 audit[2055]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.743000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe4100fab0 a2=0 a3=0 items=0 ppid=1879 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 06:37:01.812000 audit[2062]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.812000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe8cb6d120 a2=0 a3=0 items=0 ppid=1879 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 06:37:01.827000 audit[2064]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.827000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc90798cd0 a2=0 a3=0 items=0 ppid=1879 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.827000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 06:37:01.841000 audit[2066]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.841000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5ca1c200 a2=0 a3=0 items=0 ppid=1879 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 06:37:01.856000 audit[2069]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.856000 audit[2069]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcc7c41220 a2=0 a3=0 items=0 ppid=1879 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:37:01.872000 audit[2071]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2071 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:01.872000 audit[2071]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffe0006940 a2=0 a3=0 items=0 ppid=1879 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:01.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 06:37:01.875480 systemd-networkd[1524]: docker0: Link UP Jan 20 06:37:01.899389 dockerd[1879]: time="2026-01-20T06:37:01.898682855Z" level=info msg="Loading containers: done." Jan 20 06:37:01.933782 kubelet[2042]: E0120 06:37:01.933737 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:37:01.942405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:37:01.942685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:37:01.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:01.948390 systemd[1]: kubelet.service: Consumed 568ms CPU time, 110.7M memory peak. 
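The kubelet exit above (and the restart cycle that follows, with systemd's restart counter climbing on each attempt) all trace back to one missing file: `/var/lib/kubelet/config.yaml`. On kubeadm-managed nodes that file is written by `kubeadm init` or `kubeadm join`, so these failures are expected until the node is bootstrapped — an assumption about this setup; the log itself only shows the missing file. A small sketch pulling the offending path out of such an error line (text abridged from the entry above):

```python
import re

# Error line copied (abridged) from the kubelet output above:
line = ('E0120 06:37:01.933737 2042 run.go:72] "command failed" '
        'err="failed to load kubelet config file, path: '
        '/var/lib/kubelet/config.yaml, error: '
        'open /var/lib/kubelet/config.yaml: no such file or directory"')

# Extract the config path the kubelet could not read.
m = re.search(r'path: (\S+?),', line)
print(m.group(1))  # /var/lib/kubelet/config.yaml
```

Because the unit has `Restart=` configured, systemd keeps scheduling restart jobs; each attempt fails identically until the config file exists.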
Jan 20 06:37:01.968787 dockerd[1879]: time="2026-01-20T06:37:01.968532208Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 06:37:01.968787 dockerd[1879]: time="2026-01-20T06:37:01.968703577Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 06:37:01.969751 dockerd[1879]: time="2026-01-20T06:37:01.968786813Z" level=info msg="Initializing buildkit" Jan 20 06:37:02.106399 dockerd[1879]: time="2026-01-20T06:37:02.106005531Z" level=info msg="Completed buildkit initialization" Jan 20 06:37:02.129551 dockerd[1879]: time="2026-01-20T06:37:02.129378787Z" level=info msg="Daemon has completed initialization" Jan 20 06:37:02.132509 dockerd[1879]: time="2026-01-20T06:37:02.129868801Z" level=info msg="API listen on /run/docker.sock" Jan 20 06:37:02.130714 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 06:37:02.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:03.666448 containerd[1645]: time="2026-01-20T06:37:03.666315264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 06:37:04.789646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097547364.mount: Deactivated successfully. 
Jan 20 06:37:08.858689 containerd[1645]: time="2026-01-20T06:37:08.857877300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:08.863488 containerd[1645]: time="2026-01-20T06:37:08.863324567Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28063793" Jan 20 06:37:08.869729 containerd[1645]: time="2026-01-20T06:37:08.868801172Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:08.883262 containerd[1645]: time="2026-01-20T06:37:08.882619114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:08.892544 containerd[1645]: time="2026-01-20T06:37:08.891810760Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 5.225448077s" Jan 20 06:37:08.892544 containerd[1645]: time="2026-01-20T06:37:08.891851046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 06:37:08.896515 containerd[1645]: time="2026-01-20T06:37:08.893640918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 06:37:12.070617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 20 06:37:12.083878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:12.925682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:12.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:12.935676 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 20 06:37:12.935863 kernel: audit: type=1130 audit(1768891032.925:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:13.004645 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:37:13.470389 containerd[1645]: time="2026-01-20T06:37:13.469774579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:13.474534 containerd[1645]: time="2026-01-20T06:37:13.473698073Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 06:37:13.479351 containerd[1645]: time="2026-01-20T06:37:13.478663922Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:13.485811 containerd[1645]: time="2026-01-20T06:37:13.485753985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:13.489316 containerd[1645]: time="2026-01-20T06:37:13.487409645Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 4.593189937s" Jan 20 06:37:13.489316 containerd[1645]: time="2026-01-20T06:37:13.487549787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 06:37:13.494677 containerd[1645]: time="2026-01-20T06:37:13.493541981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 06:37:13.495372 kubelet[2185]: E0120 06:37:13.495327 2185 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:37:13.499800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:37:13.500566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:37:13.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:13.502436 systemd[1]: kubelet.service: Consumed 1.073s CPU time, 110.7M memory peak. Jan 20 06:37:13.541272 kernel: audit: type=1131 audit(1768891033.501:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 06:37:18.053872 containerd[1645]: time="2026-01-20T06:37:18.053498709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:18.063859 containerd[1645]: time="2026-01-20T06:37:18.063669861Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 06:37:18.067604 containerd[1645]: time="2026-01-20T06:37:18.067421130Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:18.074261 containerd[1645]: time="2026-01-20T06:37:18.073944953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:18.080838 containerd[1645]: time="2026-01-20T06:37:18.079995628Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 4.586361223s" Jan 20 06:37:18.080838 containerd[1645]: time="2026-01-20T06:37:18.080801923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 06:37:18.084295 containerd[1645]: time="2026-01-20T06:37:18.083930095Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 06:37:22.658823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974929642.mount: Deactivated successfully. 
Jan 20 06:37:23.555333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 06:37:23.588605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:24.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:24.788689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:24.827289 kernel: audit: type=1130 audit(1768891044.788:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:24.840302 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:37:25.934005 kubelet[2215]: E0120 06:37:25.933483 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:37:25.954642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:37:25.956793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:37:25.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:25.960852 systemd[1]: kubelet.service: Consumed 2.041s CPU time, 109.2M memory peak. 
Jan 20 06:37:25.995390 kernel: audit: type=1131 audit(1768891045.959:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:28.406531 containerd[1645]: time="2026-01-20T06:37:28.405269380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:28.409398 containerd[1645]: time="2026-01-20T06:37:28.409181530Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 20 06:37:28.412376 containerd[1645]: time="2026-01-20T06:37:28.412337721Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:28.416495 containerd[1645]: time="2026-01-20T06:37:28.416449128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:28.417593 containerd[1645]: time="2026-01-20T06:37:28.416999802Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 10.333030373s" Jan 20 06:37:28.417593 containerd[1645]: time="2026-01-20T06:37:28.417366839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 06:37:28.419638 containerd[1645]: time="2026-01-20T06:37:28.419426056Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 06:37:29.911428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100304884.mount: Deactivated successfully. Jan 20 06:37:33.374014 update_engine[1624]: I20260120 06:37:33.370378 1624 update_attempter.cc:509] Updating boot flags... Jan 20 06:37:35.581513 containerd[1645]: time="2026-01-20T06:37:35.580887627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:35.583914 containerd[1645]: time="2026-01-20T06:37:35.583591155Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17654557" Jan 20 06:37:35.586709 containerd[1645]: time="2026-01-20T06:37:35.586596218Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:35.594930 containerd[1645]: time="2026-01-20T06:37:35.594219899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:35.595577 containerd[1645]: time="2026-01-20T06:37:35.595397736Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 7.175879429s" Jan 20 06:37:35.595577 containerd[1645]: time="2026-01-20T06:37:35.595496319Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 06:37:35.597265 containerd[1645]: 
time="2026-01-20T06:37:35.597237404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 06:37:36.053700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 06:37:36.059249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:36.365559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3950736235.mount: Deactivated successfully. Jan 20 06:37:36.406688 containerd[1645]: time="2026-01-20T06:37:36.405521727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:37:36.411450 containerd[1645]: time="2026-01-20T06:37:36.410665242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 06:37:36.417481 containerd[1645]: time="2026-01-20T06:37:36.416932920Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:37:36.424955 containerd[1645]: time="2026-01-20T06:37:36.424838326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:37:36.425738 containerd[1645]: time="2026-01-20T06:37:36.425604842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 828.166345ms" Jan 20 06:37:36.425738 containerd[1645]: 
time="2026-01-20T06:37:36.425657850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 06:37:36.427242 containerd[1645]: time="2026-01-20T06:37:36.427005604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 06:37:36.603841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:36.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:36.624398 kernel: audit: type=1130 audit(1768891056.604:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:36.633693 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:37:36.958534 kubelet[2306]: E0120 06:37:36.958003 2306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:37:36.965531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:37:36.965879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:37:36.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:36.967879 systemd[1]: kubelet.service: Consumed 669ms CPU time, 109.3M memory peak. 
Jan 20 06:37:36.986292 kernel: audit: type=1131 audit(1768891056.967:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:37.226614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044485968.mount: Deactivated successfully. Jan 20 06:37:42.514632 containerd[1645]: time="2026-01-20T06:37:42.513991920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:42.518531 containerd[1645]: time="2026-01-20T06:37:42.515826984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55833262" Jan 20 06:37:42.518531 containerd[1645]: time="2026-01-20T06:37:42.518214064Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:42.523378 containerd[1645]: time="2026-01-20T06:37:42.523284202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:37:42.525511 containerd[1645]: time="2026-01-20T06:37:42.525325100Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.098048924s" Jan 20 06:37:42.525511 containerd[1645]: time="2026-01-20T06:37:42.525422811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 06:37:46.084702 
systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:46.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:46.085472 systemd[1]: kubelet.service: Consumed 669ms CPU time, 109.3M memory peak. Jan 20 06:37:46.092485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:46.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:46.124979 kernel: audit: type=1130 audit(1768891066.084:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:46.125317 kernel: audit: type=1131 audit(1768891066.084:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:46.150174 systemd[1]: Reload requested from client PID 2400 ('systemctl') (unit session-8.scope)... Jan 20 06:37:46.150370 systemd[1]: Reloading... Jan 20 06:37:46.286236 zram_generator::config[2450]: No configuration found. Jan 20 06:37:46.686221 systemd[1]: Reloading finished in 535 ms. 
Jan 20 06:37:46.735000 audit: BPF prog-id=61 op=LOAD Jan 20 06:37:46.736000 audit: BPF prog-id=51 op=UNLOAD Jan 20 06:37:46.750909 kernel: audit: type=1334 audit(1768891066.735:290): prog-id=61 op=LOAD Jan 20 06:37:46.750990 kernel: audit: type=1334 audit(1768891066.736:291): prog-id=51 op=UNLOAD Jan 20 06:37:46.751012 kernel: audit: type=1334 audit(1768891066.736:292): prog-id=62 op=LOAD Jan 20 06:37:46.751164 kernel: audit: type=1334 audit(1768891066.736:293): prog-id=63 op=LOAD Jan 20 06:37:46.751192 kernel: audit: type=1334 audit(1768891066.736:294): prog-id=52 op=UNLOAD Jan 20 06:37:46.751225 kernel: audit: type=1334 audit(1768891066.736:295): prog-id=53 op=UNLOAD Jan 20 06:37:46.751238 kernel: audit: type=1334 audit(1768891066.738:296): prog-id=64 op=LOAD Jan 20 06:37:46.736000 audit: BPF prog-id=62 op=LOAD Jan 20 06:37:46.736000 audit: BPF prog-id=63 op=LOAD Jan 20 06:37:46.736000 audit: BPF prog-id=52 op=UNLOAD Jan 20 06:37:46.736000 audit: BPF prog-id=53 op=UNLOAD Jan 20 06:37:46.738000 audit: BPF prog-id=64 op=LOAD Jan 20 06:37:46.738000 audit: BPF prog-id=45 op=UNLOAD Jan 20 06:37:46.791416 kernel: audit: type=1334 audit(1768891066.738:297): prog-id=45 op=UNLOAD Jan 20 06:37:46.738000 audit: BPF prog-id=65 op=LOAD Jan 20 06:37:46.738000 audit: BPF prog-id=66 op=LOAD Jan 20 06:37:46.738000 audit: BPF prog-id=46 op=UNLOAD Jan 20 06:37:46.738000 audit: BPF prog-id=47 op=UNLOAD Jan 20 06:37:46.742000 audit: BPF prog-id=67 op=LOAD Jan 20 06:37:46.742000 audit: BPF prog-id=42 op=UNLOAD Jan 20 06:37:46.742000 audit: BPF prog-id=68 op=LOAD Jan 20 06:37:46.742000 audit: BPF prog-id=69 op=LOAD Jan 20 06:37:46.742000 audit: BPF prog-id=43 op=UNLOAD Jan 20 06:37:46.742000 audit: BPF prog-id=44 op=UNLOAD Jan 20 06:37:46.745000 audit: BPF prog-id=70 op=LOAD Jan 20 06:37:46.745000 audit: BPF prog-id=48 op=UNLOAD Jan 20 06:37:46.745000 audit: BPF prog-id=71 op=LOAD Jan 20 06:37:46.745000 audit: BPF prog-id=72 op=LOAD Jan 20 06:37:46.745000 audit: BPF prog-id=49 
op=UNLOAD Jan 20 06:37:46.745000 audit: BPF prog-id=50 op=UNLOAD Jan 20 06:37:46.746000 audit: BPF prog-id=73 op=LOAD Jan 20 06:37:46.746000 audit: BPF prog-id=57 op=UNLOAD Jan 20 06:37:46.748000 audit: BPF prog-id=74 op=LOAD Jan 20 06:37:46.748000 audit: BPF prog-id=41 op=UNLOAD Jan 20 06:37:46.749000 audit: BPF prog-id=75 op=LOAD Jan 20 06:37:46.749000 audit: BPF prog-id=76 op=LOAD Jan 20 06:37:46.749000 audit: BPF prog-id=54 op=UNLOAD Jan 20 06:37:46.750000 audit: BPF prog-id=55 op=UNLOAD Jan 20 06:37:46.752000 audit: BPF prog-id=77 op=LOAD Jan 20 06:37:46.752000 audit: BPF prog-id=58 op=UNLOAD Jan 20 06:37:46.753000 audit: BPF prog-id=78 op=LOAD Jan 20 06:37:46.753000 audit: BPF prog-id=79 op=LOAD Jan 20 06:37:46.753000 audit: BPF prog-id=59 op=UNLOAD Jan 20 06:37:46.753000 audit: BPF prog-id=60 op=UNLOAD Jan 20 06:37:46.754000 audit: BPF prog-id=80 op=LOAD Jan 20 06:37:46.754000 audit: BPF prog-id=56 op=UNLOAD Jan 20 06:37:46.837591 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 06:37:46.837893 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 06:37:46.838686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:46.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:37:46.838888 systemd[1]: kubelet.service: Consumed 226ms CPU time, 98.5M memory peak. Jan 20 06:37:46.843283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:47.136876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:47.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:37:47.164902 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:37:47.295583 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:37:47.295583 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:37:47.295583 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:37:47.296374 kubelet[2493]: I0120 06:37:47.295570 2493 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:37:47.658592 kubelet[2493]: I0120 06:37:47.658357 2493 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:37:47.658592 kubelet[2493]: I0120 06:37:47.658455 2493 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:37:47.658892 kubelet[2493]: I0120 06:37:47.658882 2493 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:37:47.702643 kubelet[2493]: E0120 06:37:47.702412 2493 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:37:47.713156 kubelet[2493]: I0120 
06:37:47.712572 2493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:37:47.728673 kubelet[2493]: I0120 06:37:47.728563 2493 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:37:47.741876 kubelet[2493]: I0120 06:37:47.739707 2493 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 06:37:47.741876 kubelet[2493]: I0120 06:37:47.740625 2493 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:37:47.743207 kubelet[2493]: I0120 06:37:47.742567 2493 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope"
:"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:37:47.743207 kubelet[2493]: I0120 06:37:47.742986 2493 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:37:47.743207 kubelet[2493]: I0120 06:37:47.742998 2493 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:37:47.743536 kubelet[2493]: I0120 06:37:47.743293 2493 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:37:47.748559 kubelet[2493]: I0120 06:37:47.748386 2493 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:37:47.748559 kubelet[2493]: I0120 06:37:47.748466 2493 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:37:47.748559 kubelet[2493]: I0120 06:37:47.748490 2493 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:37:47.748559 kubelet[2493]: I0120 06:37:47.748500 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:37:47.755158 kubelet[2493]: W0120 06:37:47.754384 2493 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:47.755158 kubelet[2493]: E0120 06:37:47.754628 2493 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:37:47.755158 kubelet[2493]: W0120 06:37:47.754665 2493 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:47.755554 kubelet[2493]: E0120 06:37:47.755158 2493 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:37:47.757444 kubelet[2493]: I0120 06:37:47.757353 2493 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:37:47.758284 kubelet[2493]: I0120 06:37:47.757974 2493 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:37:47.760012 kubelet[2493]: W0120 06:37:47.759936 2493 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 20 06:37:47.766553 kubelet[2493]: I0120 06:37:47.766426 2493 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:37:47.767382 kubelet[2493]: I0120 06:37:47.767294 2493 server.go:1287] "Started kubelet" Jan 20 06:37:47.767498 kubelet[2493]: I0120 06:37:47.767466 2493 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:37:47.780112 kubelet[2493]: I0120 06:37:47.779956 2493 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:37:47.784512 kubelet[2493]: I0120 06:37:47.784333 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:37:47.785366 kubelet[2493]: E0120 06:37:47.775516 2493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c5d0ddc955103 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 06:37:47.766493443 +0000 UTC m=+0.584835846,LastTimestamp:2026-01-20 06:37:47.766493443 +0000 UTC m=+0.584835846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 06:37:47.787012 kubelet[2493]: I0120 06:37:47.786919 2493 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:37:47.791227 kubelet[2493]: E0120 06:37:47.789974 2493 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:37:47.795378 kubelet[2493]: I0120 06:37:47.795274 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:37:47.795535 kubelet[2493]: I0120 06:37:47.795373 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:37:47.800991 kubelet[2493]: I0120 06:37:47.800849 2493 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:37:47.801872 kubelet[2493]: E0120 06:37:47.801581 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 06:37:47.801872 kubelet[2493]: I0120 06:37:47.801679 2493 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:37:47.801872 kubelet[2493]: I0120 06:37:47.801827 2493 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:37:47.802442 kubelet[2493]: E0120 06:37:47.801995 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Jan 20 06:37:47.803710 kubelet[2493]: W0120 06:37:47.802819 2493 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:47.803816 kubelet[2493]: E0120 06:37:47.803499 2493 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" 
Jan 20 06:37:47.804411 kubelet[2493]: I0120 06:37:47.804292 2493 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:37:47.804569 kubelet[2493]: I0120 06:37:47.804438 2493 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:37:47.811422 kubelet[2493]: I0120 06:37:47.811173 2493 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:37:47.819000 audit[2508]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.819000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff397d3010 a2=0 a3=0 items=0 ppid=2493 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:37:47.824000 audit[2509]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.824000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7dc0f4b0 a2=0 a3=0 items=0 ppid=2493 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:37:47.836000 audit[2511]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2511 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.836000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe566545c0 a2=0 a3=0 items=0 ppid=2493 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:37:47.845000 audit[2515]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.845000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffceb372dd0 a2=0 a3=0 items=0 ppid=2493 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:37:47.848910 kubelet[2493]: I0120 06:37:47.848810 2493 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:37:47.848910 kubelet[2493]: I0120 06:37:47.848887 2493 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:37:47.848910 kubelet[2493]: I0120 06:37:47.848910 2493 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:37:47.853576 kubelet[2493]: I0120 06:37:47.853445 2493 policy_none.go:49] "None policy: Start" Jan 20 06:37:47.853576 kubelet[2493]: I0120 06:37:47.853511 2493 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:37:47.853576 kubelet[2493]: I0120 06:37:47.853523 2493 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:37:47.867000 audit[2518]: 
NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.867000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe6b0caf40 a2=0 a3=0 items=0 ppid=2493 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 06:37:47.870384 kubelet[2493]: I0120 06:37:47.869275 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:37:47.873000 audit[2521]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.873000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1215b600 a2=0 a3=0 items=0 ppid=2493 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:37:47.875697 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 06:37:47.877000 audit[2520]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:47.877000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2570e6d0 a2=0 a3=0 items=0 ppid=2493 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:37:47.878988 kubelet[2493]: I0120 06:37:47.878676 2493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:37:47.879293 kubelet[2493]: I0120 06:37:47.879196 2493 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:37:47.879620 kubelet[2493]: I0120 06:37:47.879529 2493 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 06:37:47.879620 kubelet[2493]: I0120 06:37:47.879607 2493 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:37:47.880199 kubelet[2493]: E0120 06:37:47.879928 2493 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:37:47.880000 audit[2522]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.880000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9be25ab0 a2=0 a3=0 items=0 ppid=2493 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:37:47.882376 kubelet[2493]: W0120 06:37:47.881556 2493 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:47.882376 kubelet[2493]: E0120 06:37:47.881974 2493 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:37:47.885000 audit[2523]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:47.885000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1b2c2c50 a2=0 a3=0 
items=0 ppid=2493 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:37:47.886000 audit[2525]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:37:47.886000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf3754970 a2=0 a3=0 items=0 ppid=2493 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:37:47.890000 audit[2527]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:47.890000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd404e4440 a2=0 a3=0 items=0 ppid=2493 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:37:47.893019 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 20 06:37:47.896000 audit[2528]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:37:47.896000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5247b4d0 a2=0 a3=0 items=0 ppid=2493 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:47.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:37:47.902009 kubelet[2493]: E0120 06:37:47.901991 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 06:37:47.903376 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 06:37:47.914950 kubelet[2493]: I0120 06:37:47.914677 2493 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:37:47.916376 kubelet[2493]: I0120 06:37:47.915544 2493 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:37:47.916376 kubelet[2493]: I0120 06:37:47.915628 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:37:47.918218 kubelet[2493]: I0120 06:37:47.916847 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:37:47.927395 kubelet[2493]: E0120 06:37:47.927369 2493 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 06:37:47.927465 kubelet[2493]: E0120 06:37:47.927413 2493 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 06:37:48.002547 kubelet[2493]: I0120 06:37:48.002465 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:48.002547 kubelet[2493]: I0120 06:37:48.002548 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:48.002734 kubelet[2493]: I0120 06:37:48.002566 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:48.002734 kubelet[2493]: I0120 06:37:48.002581 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:48.002734 kubelet[2493]: I0120 06:37:48.002601 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:48.002734 kubelet[2493]: I0120 06:37:48.002619 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:48.003808 kubelet[2493]: I0120 06:37:48.003232 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:48.004468 kubelet[2493]: E0120 06:37:48.004341 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Jan 20 06:37:48.005277 kubelet[2493]: I0120 06:37:48.005169 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:48.005277 kubelet[2493]: I0120 06:37:48.005255 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:48.011239 systemd[1]: Created slice kubepods-burstable-pod10dbc504bce4246ec6e31ddd7f34da4c.slice - libcontainer container kubepods-burstable-pod10dbc504bce4246ec6e31ddd7f34da4c.slice. Jan 20 06:37:48.025431 kubelet[2493]: I0120 06:37:48.025130 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 06:37:48.026395 kubelet[2493]: E0120 06:37:48.025958 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jan 20 06:37:48.028391 kubelet[2493]: E0120 06:37:48.028286 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:48.030889 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 06:37:48.037006 kubelet[2493]: E0120 06:37:48.036985 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:48.041530 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. 
Jan 20 06:37:48.044950 kubelet[2493]: E0120 06:37:48.044854 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:48.230445 kubelet[2493]: I0120 06:37:48.230200 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 06:37:48.231134 kubelet[2493]: E0120 06:37:48.230878 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jan 20 06:37:48.329975 kubelet[2493]: E0120 06:37:48.329897 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.331906 containerd[1645]: time="2026-01-20T06:37:48.331650343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10dbc504bce4246ec6e31ddd7f34da4c,Namespace:kube-system,Attempt:0,}" Jan 20 06:37:48.338989 kubelet[2493]: E0120 06:37:48.338869 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.340691 containerd[1645]: time="2026-01-20T06:37:48.340643293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 06:37:48.347242 kubelet[2493]: E0120 06:37:48.346684 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.348350 containerd[1645]: time="2026-01-20T06:37:48.348318760Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 06:37:48.407936 kubelet[2493]: E0120 06:37:48.405643 2493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Jan 20 06:37:48.412514 containerd[1645]: time="2026-01-20T06:37:48.412377744Z" level=info msg="connecting to shim bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb" address="unix:///run/containerd/s/ca24775d8831d3cc391a33d7204a8eab09d572f1379220745850b2796ab04a98" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:37:48.418334 containerd[1645]: time="2026-01-20T06:37:48.418222469Z" level=info msg="connecting to shim ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42" address="unix:///run/containerd/s/aeee8d52c7fe09c025dcebb34e16f5d11a58f29739da56b5cd454c4f8c5d3eb1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:37:48.445969 containerd[1645]: time="2026-01-20T06:37:48.443449581Z" level=info msg="connecting to shim 7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813" address="unix:///run/containerd/s/c7a5ef023974da9c19614eefbed575b985f8e4291edde1f5355dd06709992bc0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:37:48.522702 systemd[1]: Started cri-containerd-ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42.scope - libcontainer container ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42. Jan 20 06:37:48.538840 systemd[1]: Started cri-containerd-7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813.scope - libcontainer container 7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813. 
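The audit `SYSCALL` records interleaved through this log share one flat layout: space-separated `key=value` fields, where values are either bare tokens (`pid=2587`) or double-quoted strings (`comm="runc"`). A small parser written against only that visible layout (the function name is ours):

```python
import re

def parse_audit_record(record: str) -> dict[str, str]:
    """Split an audit record body into its key=value fields.
    Values are bare tokens (pid=2587) or double-quoted (comm="runc")."""
    return {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', record)}

# Field subset from one of the runc SYSCALL records below:
rec = ('arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 '
       'ppid=2548 pid=2587 comm="runc" exe="/usr/bin/runc"')
fields = parse_audit_record(rec)
print(fields["syscall"], fields["comm"], fields["exe"])  # 321 runc /usr/bin/runc
```

On x86_64, `syscall=321` is `bpf(2)` and `syscall=46` is `sendmsg(2)`, which matches the BPF and netfilter activity these records accompany.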
Jan 20 06:37:48.542905 systemd[1]: Started cri-containerd-bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb.scope - libcontainer container bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb. Jan 20 06:37:48.562000 audit: BPF prog-id=81 op=LOAD Jan 20 06:37:48.563000 audit: BPF prog-id=82 op=LOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=82 op=UNLOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=83 op=LOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=84 op=LOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=84 op=UNLOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=83 op=UNLOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.563000 audit: BPF prog-id=85 op=LOAD Jan 20 06:37:48.563000 audit[2587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2548 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365346534373834636366636538633466373433323262333738393562 Jan 20 06:37:48.580000 audit: BPF prog-id=86 op=LOAD Jan 20 06:37:48.582000 audit: BPF prog-id=87 op=LOAD Jan 20 06:37:48.582000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.582000 audit: BPF prog-id=87 op=UNLOAD Jan 20 06:37:48.582000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.583000 audit: BPF prog-id=88 op=LOAD Jan 20 06:37:48.583000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.583000 audit: BPF prog-id=89 op=LOAD Jan 20 06:37:48.583000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.583000 audit: BPF prog-id=89 op=UNLOAD Jan 20 06:37:48.583000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 
pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.583000 audit: BPF prog-id=88 op=UNLOAD Jan 20 06:37:48.583000 audit[2589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.583000 audit: BPF prog-id=90 op=LOAD Jan 20 06:37:48.583000 audit[2589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2573 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764343233373466633932626363323762653034396366383337626263 Jan 20 06:37:48.588000 audit: BPF prog-id=91 op=LOAD Jan 20 06:37:48.589000 audit: BPF prog-id=92 op=LOAD Jan 20 06:37:48.589000 audit[2582]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.589000 audit: BPF prog-id=92 op=UNLOAD Jan 20 06:37:48.589000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.591000 audit: BPF prog-id=93 op=LOAD Jan 20 06:37:48.591000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.591000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.592000 audit: BPF prog-id=94 op=LOAD Jan 
20 06:37:48.592000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.592000 audit: BPF prog-id=94 op=UNLOAD Jan 20 06:37:48.592000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.593000 audit: BPF prog-id=93 op=UNLOAD Jan 20 06:37:48.593000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.593000 
audit: BPF prog-id=95 op=LOAD Jan 20 06:37:48.593000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=2543 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266383435613564383034393936623965616461356465646332613632 Jan 20 06:37:48.638469 kubelet[2493]: I0120 06:37:48.638386 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 06:37:48.638944 kubelet[2493]: E0120 06:37:48.638877 2493 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jan 20 06:37:48.684440 containerd[1645]: time="2026-01-20T06:37:48.684234445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813\"" Jan 20 06:37:48.694455 containerd[1645]: time="2026-01-20T06:37:48.694164925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42\"" Jan 20 06:37:48.697216 kubelet[2493]: E0120 06:37:48.696604 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.697216 kubelet[2493]: E0120 
06:37:48.696711 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.705729 containerd[1645]: time="2026-01-20T06:37:48.705632957Z" level=info msg="CreateContainer within sandbox \"7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 06:37:48.706593 containerd[1645]: time="2026-01-20T06:37:48.706477741Z" level=info msg="CreateContainer within sandbox \"ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 06:37:48.711963 containerd[1645]: time="2026-01-20T06:37:48.711520974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:10dbc504bce4246ec6e31ddd7f34da4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb\"" Jan 20 06:37:48.713590 kubelet[2493]: E0120 06:37:48.713553 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:48.717506 containerd[1645]: time="2026-01-20T06:37:48.717314232Z" level=info msg="CreateContainer within sandbox \"bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 06:37:48.730551 containerd[1645]: time="2026-01-20T06:37:48.730516603Z" level=info msg="Container 817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:37:48.737187 containerd[1645]: time="2026-01-20T06:37:48.736971548Z" level=info msg="Container 05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:37:48.746466 
containerd[1645]: time="2026-01-20T06:37:48.746350116Z" level=info msg="CreateContainer within sandbox \"7d42374fc92bcc27be049cf837bbc0c1b27735c1b8e08eb12ec1d41fab459813\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c\"" Jan 20 06:37:48.750378 containerd[1645]: time="2026-01-20T06:37:48.750236743Z" level=info msg="StartContainer for \"817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c\"" Jan 20 06:37:48.751446 containerd[1645]: time="2026-01-20T06:37:48.750530385Z" level=info msg="Container df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:37:48.754424 containerd[1645]: time="2026-01-20T06:37:48.754315302Z" level=info msg="connecting to shim 817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c" address="unix:///run/containerd/s/c7a5ef023974da9c19614eefbed575b985f8e4291edde1f5355dd06709992bc0" protocol=ttrpc version=3 Jan 20 06:37:48.765002 containerd[1645]: time="2026-01-20T06:37:48.764951424Z" level=info msg="CreateContainer within sandbox \"ce4e4784ccfce8c4f74322b37895b6246fa07cf8059c391ce02aab88904c9d42\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd\"" Jan 20 06:37:48.770630 containerd[1645]: time="2026-01-20T06:37:48.770599232Z" level=info msg="StartContainer for \"05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd\"" Jan 20 06:37:48.772914 kubelet[2493]: W0120 06:37:48.772686 2493 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:48.773640 kubelet[2493]: E0120 06:37:48.773415 2493 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:37:48.774210 containerd[1645]: time="2026-01-20T06:37:48.773946768Z" level=info msg="connecting to shim 05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd" address="unix:///run/containerd/s/aeee8d52c7fe09c025dcebb34e16f5d11a58f29739da56b5cd454c4f8c5d3eb1" protocol=ttrpc version=3 Jan 20 06:37:48.777592 containerd[1645]: time="2026-01-20T06:37:48.776393271Z" level=info msg="CreateContainer within sandbox \"bf845a5d804996b9eada5dedc2a62e533a8a9a345fa36e9847905322a9db71bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775\"" Jan 20 06:37:48.782169 containerd[1645]: time="2026-01-20T06:37:48.781486788Z" level=info msg="StartContainer for \"df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775\"" Jan 20 06:37:48.783976 containerd[1645]: time="2026-01-20T06:37:48.783949189Z" level=info msg="connecting to shim df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775" address="unix:///run/containerd/s/ca24775d8831d3cc391a33d7204a8eab09d572f1379220745850b2796ab04a98" protocol=ttrpc version=3 Jan 20 06:37:48.806514 systemd[1]: Started cri-containerd-817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c.scope - libcontainer container 817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c. Jan 20 06:37:48.831674 systemd[1]: Started cri-containerd-df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775.scope - libcontainer container df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775. 
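The audit stream above pairs each `BPF prog-id=N op=LOAD` with a later `op=UNLOAD` as runc sets up each container. Replaying those pairs shows which program IDs a sequence leaves loaded; a sketch (helper name ours), using the sequence logged for pid 2587:

```python
def live_bpf_progs(events):
    """Replay (prog_id, op) pairs from audit 'BPF prog-id=N op=...' records
    and return the program IDs still loaded at the end."""
    live = set()
    for prog_id, op in events:
        if op == "LOAD":
            live.add(prog_id)
        else:
            live.discard(prog_id)
    return live

# LOAD/UNLOAD sequence from the pid 2587 records above:
events = [(81, "LOAD"), (82, "LOAD"), (82, "UNLOAD"), (83, "LOAD"),
          (84, "LOAD"), (84, "UNLOAD"), (83, "UNLOAD"), (85, "LOAD")]
print(sorted(live_bpf_progs(events)))  # [81, 85]
```

The short-lived IDs are verifier probe programs; the ones that stay loaded persist for the container's lifetime.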
Jan 20 06:37:48.850986 systemd[1]: Started cri-containerd-05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd.scope - libcontainer container 05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd. Jan 20 06:37:48.856000 audit: BPF prog-id=96 op=LOAD Jan 20 06:37:48.857000 audit: BPF prog-id=97 op=LOAD Jan 20 06:37:48.857000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.857000 audit: BPF prog-id=97 op=UNLOAD Jan 20 06:37:48.857000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.857000 audit: BPF prog-id=98 op=LOAD Jan 20 06:37:48.857000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:37:48.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.858000 audit: BPF prog-id=99 op=LOAD Jan 20 06:37:48.858000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.858000 audit: BPF prog-id=99 op=UNLOAD Jan 20 06:37:48.858000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.858000 audit: BPF prog-id=98 op=UNLOAD Jan 20 06:37:48.858000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.858000 audit: BPF prog-id=100 op=LOAD Jan 20 06:37:48.858000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2573 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831373737343931313831316132663064643562633264326361393461 Jan 20 06:37:48.868000 audit: BPF prog-id=101 op=LOAD Jan 20 06:37:48.869000 audit: BPF prog-id=102 op=LOAD Jan 20 06:37:48.869000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.870000 audit: BPF prog-id=102 op=UNLOAD Jan 20 06:37:48.870000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.871000 audit: BPF prog-id=103 op=LOAD Jan 20 06:37:48.871000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.871000 audit: BPF prog-id=104 op=LOAD Jan 20 06:37:48.871000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.872000 audit: BPF prog-id=104 op=UNLOAD Jan 20 06:37:48.872000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.872000 audit: BPF prog-id=103 op=UNLOAD Jan 20 06:37:48.872000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.872000 audit: BPF prog-id=105 op=LOAD Jan 20 06:37:48.872000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2543 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466356235393862303866373332643334313562356630366531633638 Jan 20 06:37:48.919000 audit: BPF prog-id=106 op=LOAD Jan 20 06:37:48.920000 audit: BPF prog-id=107 op=LOAD Jan 20 06:37:48.920000 
audit[2683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:48.920000 audit: BPF prog-id=107 op=UNLOAD Jan 20 06:37:48.920000 audit[2683]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:48.921000 audit: BPF prog-id=108 op=LOAD Jan 20 06:37:48.921000 audit[2683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:48.924000 audit: BPF 
prog-id=109 op=LOAD Jan 20 06:37:48.924000 audit[2683]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:48.924000 audit: BPF prog-id=109 op=UNLOAD Jan 20 06:37:48.924000 audit[2683]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:48.924000 audit: BPF prog-id=108 op=UNLOAD Jan 20 06:37:48.924000 audit[2683]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 
Jan 20 06:37:48.924000 audit: BPF prog-id=110 op=LOAD Jan 20 06:37:48.924000 audit[2683]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2548 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:48.924000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035663866356663623761663331353739346463363730323434633531 Jan 20 06:37:49.001008 containerd[1645]: time="2026-01-20T06:37:49.000834895Z" level=info msg="StartContainer for \"817774911811a2f0dd5bc2d2ca94a82a2c3fe141bf959ee99585b040f842218c\" returns successfully" Jan 20 06:37:49.004589 containerd[1645]: time="2026-01-20T06:37:49.004450641Z" level=info msg="StartContainer for \"df5b598b08f732d3415b5f06e1c682c19cf376d4f184dd540bf310fa9960e775\" returns successfully" Jan 20 06:37:49.066559 containerd[1645]: time="2026-01-20T06:37:49.066434244Z" level=info msg="StartContainer for \"05f8f5fcb7af315794dc670244c51f5abe0248d2f501de55ff5afac04d44dcfd\" returns successfully" Jan 20 06:37:49.076887 kubelet[2493]: W0120 06:37:49.075935 2493 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jan 20 06:37:49.076887 kubelet[2493]: E0120 06:37:49.076332 2493 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" 
logger="UnhandledError" Jan 20 06:37:49.443983 kubelet[2493]: I0120 06:37:49.443666 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 06:37:49.970993 kubelet[2493]: E0120 06:37:49.970885 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:49.972513 kubelet[2493]: E0120 06:37:49.972425 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:49.982669 kubelet[2493]: E0120 06:37:49.982550 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:49.983021 kubelet[2493]: E0120 06:37:49.982913 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:50.003295 kubelet[2493]: E0120 06:37:49.999155 2493 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 06:37:50.005299 kubelet[2493]: E0120 06:37:50.004485 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:50.799182 kubelet[2493]: I0120 06:37:50.798577 2493 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 06:37:50.802222 kubelet[2493]: I0120 06:37:50.801982 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:50.869398 kubelet[2493]: E0120 06:37:50.869300 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:50.869398 kubelet[2493]: I0120 06:37:50.869333 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:50.872626 kubelet[2493]: E0120 06:37:50.872502 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:50.872626 kubelet[2493]: I0120 06:37:50.872597 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:50.877293 kubelet[2493]: E0120 06:37:50.877188 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:50.997448 kubelet[2493]: I0120 06:37:50.996870 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:50.997448 kubelet[2493]: I0120 06:37:50.997684 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:50.997448 kubelet[2493]: I0120 06:37:50.998195 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:51.003734 kubelet[2493]: E0120 06:37:51.001181 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:51.003734 kubelet[2493]: E0120 06:37:51.001562 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 06:37:51.003734 kubelet[2493]: E0120 06:37:51.001578 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:51.003734 kubelet[2493]: E0120 06:37:51.001707 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:51.004672 kubelet[2493]: E0120 06:37:51.004306 2493 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:51.004672 kubelet[2493]: E0120 06:37:51.004654 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:51.755984 kubelet[2493]: I0120 06:37:51.755275 2493 apiserver.go:52] "Watching apiserver" Jan 20 06:37:51.802552 kubelet[2493]: I0120 06:37:51.802241 2493 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:37:52.003260 kubelet[2493]: I0120 06:37:52.002984 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:52.003402 kubelet[2493]: I0120 06:37:52.003367 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:52.003452 kubelet[2493]: I0120 06:37:52.003220 2493 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:52.014327 kubelet[2493]: E0120 06:37:52.012977 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:52.015855 kubelet[2493]: E0120 06:37:52.015502 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:52.017333 kubelet[2493]: E0120 06:37:52.017312 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:53.006018 kubelet[2493]: E0120 06:37:53.005914 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:53.006894 kubelet[2493]: E0120 06:37:53.006012 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:53.007448 kubelet[2493]: E0120 06:37:53.006652 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:53.633504 systemd[1]: Reload requested from client PID 2774 ('systemctl') (unit session-8.scope)... Jan 20 06:37:53.633522 systemd[1]: Reloading... Jan 20 06:37:53.864219 zram_generator::config[2823]: No configuration found. Jan 20 06:37:54.322312 systemd[1]: Reloading finished in 688 ms. Jan 20 06:37:54.393346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:54.407550 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:37:54.408284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:37:54.408480 systemd[1]: kubelet.service: Consumed 1.864s CPU time, 132.6M memory peak. 
Jan 20 06:37:54.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:54.414437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:37:54.438563 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 06:37:54.438692 kernel: audit: type=1131 audit(1768891074.407:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:54.416000 audit: BPF prog-id=111 op=LOAD Jan 20 06:37:54.447238 kernel: audit: type=1334 audit(1768891074.416:393): prog-id=111 op=LOAD Jan 20 06:37:54.447303 kernel: audit: type=1334 audit(1768891074.416:394): prog-id=80 op=UNLOAD Jan 20 06:37:54.416000 audit: BPF prog-id=80 op=UNLOAD Jan 20 06:37:54.417000 audit: BPF prog-id=112 op=LOAD Jan 20 06:37:54.462298 kernel: audit: type=1334 audit(1768891074.417:395): prog-id=112 op=LOAD Jan 20 06:37:54.462368 kernel: audit: type=1334 audit(1768891074.417:396): prog-id=61 op=UNLOAD Jan 20 06:37:54.417000 audit: BPF prog-id=61 op=UNLOAD Jan 20 06:37:54.418000 audit: BPF prog-id=113 op=LOAD Jan 20 06:37:54.476249 kernel: audit: type=1334 audit(1768891074.418:397): prog-id=113 op=LOAD Jan 20 06:37:54.476302 kernel: audit: type=1334 audit(1768891074.418:398): prog-id=114 op=LOAD Jan 20 06:37:54.418000 audit: BPF prog-id=114 op=LOAD Jan 20 06:37:54.418000 audit: BPF prog-id=62 op=UNLOAD Jan 20 06:37:54.488425 kernel: audit: type=1334 audit(1768891074.418:399): prog-id=62 op=UNLOAD Jan 20 06:37:54.418000 audit: BPF prog-id=63 op=UNLOAD Jan 20 06:37:54.494655 kernel: audit: type=1334 audit(1768891074.418:400): prog-id=63 op=UNLOAD Jan 20 06:37:54.494700 kernel: audit: type=1334 audit(1768891074.419:401): prog-id=115 op=LOAD Jan 20 06:37:54.419000 audit: BPF 
prog-id=115 op=LOAD Jan 20 06:37:54.419000 audit: BPF prog-id=64 op=UNLOAD Jan 20 06:37:54.420000 audit: BPF prog-id=116 op=LOAD Jan 20 06:37:54.420000 audit: BPF prog-id=117 op=LOAD Jan 20 06:37:54.420000 audit: BPF prog-id=65 op=UNLOAD Jan 20 06:37:54.420000 audit: BPF prog-id=66 op=UNLOAD Jan 20 06:37:54.421000 audit: BPF prog-id=118 op=LOAD Jan 20 06:37:54.421000 audit: BPF prog-id=73 op=UNLOAD Jan 20 06:37:54.423000 audit: BPF prog-id=119 op=LOAD Jan 20 06:37:54.423000 audit: BPF prog-id=67 op=UNLOAD Jan 20 06:37:54.423000 audit: BPF prog-id=120 op=LOAD Jan 20 06:37:54.423000 audit: BPF prog-id=121 op=LOAD Jan 20 06:37:54.423000 audit: BPF prog-id=68 op=UNLOAD Jan 20 06:37:54.423000 audit: BPF prog-id=69 op=UNLOAD Jan 20 06:37:54.425000 audit: BPF prog-id=122 op=LOAD Jan 20 06:37:54.425000 audit: BPF prog-id=123 op=LOAD Jan 20 06:37:54.425000 audit: BPF prog-id=75 op=UNLOAD Jan 20 06:37:54.425000 audit: BPF prog-id=76 op=UNLOAD Jan 20 06:37:54.426000 audit: BPF prog-id=124 op=LOAD Jan 20 06:37:54.426000 audit: BPF prog-id=70 op=UNLOAD Jan 20 06:37:54.426000 audit: BPF prog-id=125 op=LOAD Jan 20 06:37:54.427000 audit: BPF prog-id=126 op=LOAD Jan 20 06:37:54.427000 audit: BPF prog-id=71 op=UNLOAD Jan 20 06:37:54.427000 audit: BPF prog-id=72 op=UNLOAD Jan 20 06:37:54.429000 audit: BPF prog-id=127 op=LOAD Jan 20 06:37:54.429000 audit: BPF prog-id=74 op=UNLOAD Jan 20 06:37:54.433000 audit: BPF prog-id=128 op=LOAD Jan 20 06:37:54.433000 audit: BPF prog-id=77 op=UNLOAD Jan 20 06:37:54.433000 audit: BPF prog-id=129 op=LOAD Jan 20 06:37:54.433000 audit: BPF prog-id=130 op=LOAD Jan 20 06:37:54.433000 audit: BPF prog-id=78 op=UNLOAD Jan 20 06:37:54.433000 audit: BPF prog-id=79 op=UNLOAD Jan 20 06:37:54.768332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 06:37:54.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:37:54.781564 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:37:54.888497 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:37:54.888497 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:37:54.888497 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:37:54.888981 kubelet[2865]: I0120 06:37:54.888539 2865 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:37:54.904479 kubelet[2865]: I0120 06:37:54.904020 2865 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:37:54.904479 kubelet[2865]: I0120 06:37:54.904236 2865 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:37:54.904479 kubelet[2865]: I0120 06:37:54.904531 2865 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:37:54.913170 kubelet[2865]: I0120 06:37:54.912188 2865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 06:37:54.921247 kubelet[2865]: I0120 06:37:54.921206 2865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:37:54.933018 kubelet[2865]: I0120 06:37:54.932930 2865 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:37:54.944261 kubelet[2865]: I0120 06:37:54.943953 2865 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 06:37:54.945274 kubelet[2865]: I0120 06:37:54.945180 2865 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:37:54.945436 kubelet[2865]: I0120 06:37:54.945212 2865 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:37:54.945436 kubelet[2865]: I0120 06:37:54.945391 2865 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:37:54.945436 kubelet[2865]: I0120 06:37:54.945400 2865 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:37:54.945663 kubelet[2865]: I0120 06:37:54.945444 2865 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:37:54.945689 kubelet[2865]: I0120 06:37:54.945663 2865 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:37:54.945711 kubelet[2865]: I0120 06:37:54.945696 2865 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:37:54.945730 kubelet[2865]: I0120 06:37:54.945721 2865 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:37:54.945754 kubelet[2865]: I0120 06:37:54.945734 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:37:54.950010 kubelet[2865]: I0120 06:37:54.949591 2865 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:37:54.960663 kubelet[2865]: I0120 06:37:54.960537 2865 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:37:54.966192 kubelet[2865]: I0120 06:37:54.963206 2865 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:37:54.966192 kubelet[2865]: I0120 06:37:54.963249 2865 server.go:1287] "Started kubelet" Jan 20 06:37:54.970951 kubelet[2865]: I0120 06:37:54.969650 2865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:37:54.971479 
kubelet[2865]: I0120 06:37:54.971379 2865 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:37:54.971652 kubelet[2865]: I0120 06:37:54.971519 2865 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:37:54.973253 kubelet[2865]: I0120 06:37:54.972997 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:37:54.974314 kubelet[2865]: I0120 06:37:54.974281 2865 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:37:54.976204 kubelet[2865]: I0120 06:37:54.975987 2865 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:37:54.976522 kubelet[2865]: E0120 06:37:54.976330 2865 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 06:37:54.980310 kubelet[2865]: I0120 06:37:54.977314 2865 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:37:54.980310 kubelet[2865]: I0120 06:37:54.978400 2865 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:37:54.980310 kubelet[2865]: I0120 06:37:54.978517 2865 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:37:54.985916 kubelet[2865]: I0120 06:37:54.985891 2865 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:37:54.990216 kubelet[2865]: E0120 06:37:54.990190 2865 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:37:55.005929 kubelet[2865]: I0120 06:37:55.005743 2865 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:37:55.017995 kubelet[2865]: I0120 06:37:55.017851 2865 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:37:55.055991 kubelet[2865]: I0120 06:37:55.055370 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:37:55.068415 kubelet[2865]: I0120 06:37:55.068292 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:37:55.068415 kubelet[2865]: I0120 06:37:55.068329 2865 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:37:55.068415 kubelet[2865]: I0120 06:37:55.068353 2865 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 06:37:55.068740 kubelet[2865]: I0120 06:37:55.068562 2865 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:37:55.069524 kubelet[2865]: E0120 06:37:55.069362 2865 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:37:55.122618 kubelet[2865]: I0120 06:37:55.122505 2865 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:37:55.122618 kubelet[2865]: I0120 06:37:55.122528 2865 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:37:55.122618 kubelet[2865]: I0120 06:37:55.122550 2865 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:37:55.122960 kubelet[2865]: I0120 06:37:55.122717 2865 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 06:37:55.122960 kubelet[2865]: I0120 06:37:55.122729 2865 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 06:37:55.122960 kubelet[2865]: I0120 06:37:55.122746 2865 policy_none.go:49] "None policy: Start" Jan 20 06:37:55.122960 kubelet[2865]: I0120 06:37:55.122755 2865 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:37:55.122960 kubelet[2865]: I0120 06:37:55.122862 2865 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:37:55.123313 kubelet[2865]: I0120 06:37:55.122999 2865 state_mem.go:75] "Updated machine memory state" Jan 20 06:37:55.158139 kubelet[2865]: I0120 06:37:55.157993 2865 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:37:55.159153 kubelet[2865]: I0120 06:37:55.158389 2865 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:37:55.159153 kubelet[2865]: I0120 06:37:55.158473 2865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:37:55.160376 kubelet[2865]: I0120 06:37:55.159904 2865 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:37:55.162535 kubelet[2865]: E0120 06:37:55.162302 2865 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 06:37:55.172692 kubelet[2865]: I0120 06:37:55.172421 2865 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:55.173712 kubelet[2865]: I0120 06:37:55.173454 2865 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:55.175322 kubelet[2865]: I0120 06:37:55.175006 2865 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.191511 kubelet[2865]: E0120 06:37:55.191417 2865 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:55.202419 kubelet[2865]: E0120 06:37:55.202329 2865 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.204874 kubelet[2865]: E0120 06:37:55.202995 2865 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:55.280579 kubelet[2865]: I0120 06:37:55.279324 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.280579 kubelet[2865]: I0120 06:37:55.279536 2865 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.280579 kubelet[2865]: I0120 06:37:55.279556 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.280579 kubelet[2865]: I0120 06:37:55.279573 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.280579 kubelet[2865]: I0120 06:37:55.279586 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 06:37:55.280991 kubelet[2865]: I0120 06:37:55.279601 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:55.280991 kubelet[2865]: I0120 06:37:55.279613 2865 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:55.280991 kubelet[2865]: I0120 06:37:55.279625 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:55.280991 kubelet[2865]: I0120 06:37:55.279638 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10dbc504bce4246ec6e31ddd7f34da4c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"10dbc504bce4246ec6e31ddd7f34da4c\") " pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:55.310249 kubelet[2865]: I0120 06:37:55.309404 2865 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 06:37:55.330144 kubelet[2865]: I0120 06:37:55.329974 2865 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 06:37:55.330528 kubelet[2865]: I0120 06:37:55.330437 2865 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 06:37:55.493599 kubelet[2865]: E0120 06:37:55.493383 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:55.504314 kubelet[2865]: E0120 06:37:55.503976 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
06:37:55.504454 kubelet[2865]: E0120 06:37:55.504319 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:55.946911 kubelet[2865]: I0120 06:37:55.946361 2865 apiserver.go:52] "Watching apiserver" Jan 20 06:37:55.979168 kubelet[2865]: I0120 06:37:55.978872 2865 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:37:56.112842 kubelet[2865]: E0120 06:37:56.112660 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:56.114710 kubelet[2865]: I0120 06:37:56.114630 2865 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:56.114710 kubelet[2865]: I0120 06:37:56.114701 2865 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:56.132725 kubelet[2865]: E0120 06:37:56.132605 2865 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 06:37:56.132948 kubelet[2865]: E0120 06:37:56.132904 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:56.134704 kubelet[2865]: E0120 06:37:56.134473 2865 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 20 06:37:56.135105 kubelet[2865]: E0120 06:37:56.134848 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:56.149647 kubelet[2865]: I0120 
06:37:56.149474 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.149459034 podStartE2EDuration="4.149459034s" podCreationTimestamp="2026-01-20 06:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:37:56.133197707 +0000 UTC m=+1.339673739" watchObservedRunningTime="2026-01-20 06:37:56.149459034 +0000 UTC m=+1.355935067" Jan 20 06:37:56.163646 kubelet[2865]: I0120 06:37:56.163502 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.163488568 podStartE2EDuration="4.163488568s" podCreationTimestamp="2026-01-20 06:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:37:56.151218271 +0000 UTC m=+1.357694314" watchObservedRunningTime="2026-01-20 06:37:56.163488568 +0000 UTC m=+1.369964601" Jan 20 06:37:56.163646 kubelet[2865]: I0120 06:37:56.163562 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.163558728 podStartE2EDuration="4.163558728s" podCreationTimestamp="2026-01-20 06:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:37:56.163301118 +0000 UTC m=+1.369777152" watchObservedRunningTime="2026-01-20 06:37:56.163558728 +0000 UTC m=+1.370034761" Jan 20 06:37:57.115898 kubelet[2865]: E0120 06:37:57.115739 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:57.117608 kubelet[2865]: E0120 06:37:57.117547 2865 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:58.310906 kubelet[2865]: E0120 06:37:58.310576 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:58.521925 kubelet[2865]: E0120 06:37:58.521760 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:58.626745 kubelet[2865]: I0120 06:37:58.626521 2865 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 06:37:58.630175 containerd[1645]: time="2026-01-20T06:37:58.629978815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 06:37:58.631567 kubelet[2865]: I0120 06:37:58.631441 2865 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 06:37:59.122570 kubelet[2865]: E0120 06:37:59.122424 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:59.192305 systemd[1]: Created slice kubepods-besteffort-pod76a4762a_b5f9_4d48_974c_031f5ce7f9d2.slice - libcontainer container kubepods-besteffort-pod76a4762a_b5f9_4d48_974c_031f5ce7f9d2.slice. 
Jan 20 06:37:59.216502 kubelet[2865]: I0120 06:37:59.215919 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a4762a-b5f9-4d48-974c-031f5ce7f9d2-lib-modules\") pod \"kube-proxy-grz58\" (UID: \"76a4762a-b5f9-4d48-974c-031f5ce7f9d2\") " pod="kube-system/kube-proxy-grz58" Jan 20 06:37:59.216948 kubelet[2865]: I0120 06:37:59.216720 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dthzz\" (UniqueName: \"kubernetes.io/projected/76a4762a-b5f9-4d48-974c-031f5ce7f9d2-kube-api-access-dthzz\") pod \"kube-proxy-grz58\" (UID: \"76a4762a-b5f9-4d48-974c-031f5ce7f9d2\") " pod="kube-system/kube-proxy-grz58" Jan 20 06:37:59.216948 kubelet[2865]: I0120 06:37:59.216850 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76a4762a-b5f9-4d48-974c-031f5ce7f9d2-kube-proxy\") pod \"kube-proxy-grz58\" (UID: \"76a4762a-b5f9-4d48-974c-031f5ce7f9d2\") " pod="kube-system/kube-proxy-grz58" Jan 20 06:37:59.216948 kubelet[2865]: I0120 06:37:59.216883 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a4762a-b5f9-4d48-974c-031f5ce7f9d2-xtables-lock\") pod \"kube-proxy-grz58\" (UID: \"76a4762a-b5f9-4d48-974c-031f5ce7f9d2\") " pod="kube-system/kube-proxy-grz58" Jan 20 06:37:59.506912 kubelet[2865]: E0120 06:37:59.506609 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:59.516461 containerd[1645]: time="2026-01-20T06:37:59.515966146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grz58,Uid:76a4762a-b5f9-4d48-974c-031f5ce7f9d2,Namespace:kube-system,Attempt:0,}" Jan 
20 06:37:59.615260 containerd[1645]: time="2026-01-20T06:37:59.614736671Z" level=info msg="connecting to shim 939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1" address="unix:///run/containerd/s/2523215f86e73dd192a8f6c6698adab542edd0ce518313814966f1cffb06596b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:37:59.767364 systemd[1]: Started cri-containerd-939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1.scope - libcontainer container 939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1. Jan 20 06:37:59.803199 systemd[1]: Created slice kubepods-besteffort-podead73a4b_7ba5_4f6a_9c94_b3095ae6dcf1.slice - libcontainer container kubepods-besteffort-podead73a4b_7ba5_4f6a_9c94_b3095ae6dcf1.slice. Jan 20 06:37:59.821224 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 06:37:59.821437 kernel: audit: type=1334 audit(1768891079.811:434): prog-id=131 op=LOAD Jan 20 06:37:59.811000 audit: BPF prog-id=131 op=LOAD Jan 20 06:37:59.829341 kubelet[2865]: I0120 06:37:59.829214 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66dgs\" (UniqueName: \"kubernetes.io/projected/ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1-kube-api-access-66dgs\") pod \"tigera-operator-7dcd859c48-rt8b4\" (UID: \"ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1\") " pod="tigera-operator/tigera-operator-7dcd859c48-rt8b4" Jan 20 06:37:59.829341 kubelet[2865]: I0120 06:37:59.829270 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rt8b4\" (UID: \"ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1\") " pod="tigera-operator/tigera-operator-7dcd859c48-rt8b4" Jan 20 06:37:59.812000 audit: BPF prog-id=132 op=LOAD Jan 20 06:37:59.812000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 
items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.866696 kernel: audit: type=1334 audit(1768891079.812:435): prog-id=132 op=LOAD Jan 20 06:37:59.866950 kernel: audit: type=1300 audit(1768891079.812:435): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.898532 kernel: audit: type=1327 audit(1768891079.812:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.898637 kernel: audit: type=1334 audit(1768891079.812:436): prog-id=132 op=UNLOAD Jan 20 06:37:59.812000 audit: BPF prog-id=132 op=UNLOAD Jan 20 06:37:59.812000 audit[2935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.930992 kernel: audit: type=1300 audit(1768891079.812:436): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.933006 kernel: audit: type=1327 audit(1768891079.812:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: BPF prog-id=133 op=LOAD Jan 20 06:37:59.962663 kernel: audit: type=1334 audit(1768891079.813:437): prog-id=133 op=LOAD Jan 20 06:37:59.813000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.971004 containerd[1645]: time="2026-01-20T06:37:59.969998661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-grz58,Uid:76a4762a-b5f9-4d48-974c-031f5ce7f9d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1\"" Jan 20 06:37:59.971994 kubelet[2865]: E0120 06:37:59.971967 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:37:59.981406 containerd[1645]: time="2026-01-20T06:37:59.980669421Z" level=info msg="CreateContainer within sandbox \"939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 06:37:59.993553 kernel: audit: type=1300 audit(1768891079.813:437): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.993881 kernel: audit: type=1327 audit(1768891079.813:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: BPF prog-id=134 op=LOAD Jan 20 06:37:59.813000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: BPF prog-id=134 op=UNLOAD Jan 20 06:37:59.813000 audit[2935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: BPF prog-id=133 op=UNLOAD Jan 20 06:37:59.813000 audit[2935]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:37:59.813000 audit: BPF prog-id=135 op=LOAD Jan 20 06:37:59.813000 audit[2935]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2924 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:37:59.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393934346139306136333337316337353336333233666332616533 Jan 20 06:38:00.027851 containerd[1645]: time="2026-01-20T06:38:00.027477295Z" level=info msg="Container dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146: CDI devices from CRI Config.CDIDevices: []" Jan 20 
06:38:00.047185 containerd[1645]: time="2026-01-20T06:38:00.046905749Z" level=info msg="CreateContainer within sandbox \"939944a90a63371c7536323fc2ae3e7a0611bd603a4a82ee51887cf2c08072a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146\"" Jan 20 06:38:00.057563 containerd[1645]: time="2026-01-20T06:38:00.057520763Z" level=info msg="StartContainer for \"dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146\"" Jan 20 06:38:00.064415 containerd[1645]: time="2026-01-20T06:38:00.064237571Z" level=info msg="connecting to shim dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146" address="unix:///run/containerd/s/2523215f86e73dd192a8f6c6698adab542edd0ce518313814966f1cffb06596b" protocol=ttrpc version=3 Jan 20 06:38:00.111620 containerd[1645]: time="2026-01-20T06:38:00.111262898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rt8b4,Uid:ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1,Namespace:tigera-operator,Attempt:0,}" Jan 20 06:38:00.122516 systemd[1]: Started cri-containerd-dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146.scope - libcontainer container dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146. 
Jan 20 06:38:00.189429 containerd[1645]: time="2026-01-20T06:38:00.187881962Z" level=info msg="connecting to shim 00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc" address="unix:///run/containerd/s/87f75ab28a386072d628ed1c31e0d3bd4c918402fb08da94bb5be7278c277282" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:38:00.226000 audit: BPF prog-id=136 op=LOAD Jan 20 06:38:00.226000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2924 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464336264373266393533356663656263303730303966373733363365 Jan 20 06:38:00.226000 audit: BPF prog-id=137 op=LOAD Jan 20 06:38:00.226000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2924 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464336264373266393533356663656263303730303966373733363365 Jan 20 06:38:00.226000 audit: BPF prog-id=137 op=UNLOAD Jan 20 06:38:00.226000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464336264373266393533356663656263303730303966373733363365 Jan 20 06:38:00.226000 audit: BPF prog-id=136 op=UNLOAD Jan 20 06:38:00.226000 audit[2964]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2924 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464336264373266393533356663656263303730303966373733363365 Jan 20 06:38:00.226000 audit: BPF prog-id=138 op=LOAD Jan 20 06:38:00.226000 audit[2964]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2924 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464336264373266393533356663656263303730303966373733363365 Jan 20 06:38:00.286511 systemd[1]: Started cri-containerd-00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc.scope - libcontainer container 00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc. 
Jan 20 06:38:00.310450 containerd[1645]: time="2026-01-20T06:38:00.309560367Z" level=info msg="StartContainer for \"dd3bd72f9535fcebc07009f77363e450bd1acbca1a8c3203f7aa20f6136d0146\" returns successfully" Jan 20 06:38:00.330000 audit: BPF prog-id=139 op=LOAD Jan 20 06:38:00.332000 audit: BPF prog-id=140 op=LOAD Jan 20 06:38:00.332000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.333000 audit: BPF prog-id=140 op=UNLOAD Jan 20 06:38:00.333000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.333000 audit: BPF prog-id=141 op=LOAD Jan 20 06:38:00.333000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 06:38:00.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.333000 audit: BPF prog-id=142 op=LOAD Jan 20 06:38:00.333000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.333000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.334000 audit: BPF prog-id=142 op=UNLOAD Jan 20 06:38:00.334000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.334000 audit: BPF prog-id=141 op=UNLOAD Jan 20 06:38:00.334000 audit[3003]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.334000 audit: BPF prog-id=143 op=LOAD Jan 20 06:38:00.334000 audit[3003]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2991 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030653532613563316535666639313862393365663566666366646537 Jan 20 06:38:00.348371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93868057.mount: Deactivated successfully. 
Jan 20 06:38:00.455880 containerd[1645]: time="2026-01-20T06:38:00.454563454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rt8b4,Uid:ead73a4b-7ba5-4f6a-9c94-b3095ae6dcf1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc\"" Jan 20 06:38:00.460016 containerd[1645]: time="2026-01-20T06:38:00.459443129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 06:38:00.905000 audit[3074]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:00.905000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3533f0c0 a2=0 a3=7ffc3533f0ac items=0 ppid=2978 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:38:00.913000 audit[3075]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:00.913000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc7aad6f0 a2=0 a3=7fffc7aad6dc items=0 ppid=2978 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.913000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:38:00.915000 audit[3077]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Jan 20 06:38:00.915000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe84afb560 a2=0 a3=7ffe84afb54c items=0 ppid=2978 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:38:00.919000 audit[3078]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:00.919000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d9755b0 a2=0 a3=7ffd9d97559c items=0 ppid=2978 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.919000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:38:00.946000 audit[3079]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:00.946000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8e09bb0 a2=0 a3=7ffeb8e09b9c items=0 ppid=2978 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:38:00.961000 audit[3080]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3080 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:00.961000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8698eb70 a2=0 a3=7ffe8698eb5c items=0 ppid=2978 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:00.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:38:01.074000 audit[3082]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.074000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd18fa69f0 a2=0 a3=7ffd18fa69dc items=0 ppid=2978 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:38:01.087000 audit[3084]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.087000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe8f6dbb50 a2=0 a3=7ffe8f6dbb3c items=0 ppid=2978 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.087000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 06:38:01.109000 audit[3087]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.109000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9b9b72f0 a2=0 a3=7ffe9b9b72dc items=0 ppid=2978 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 06:38:01.115000 audit[3088]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.115000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0e424340 a2=0 a3=7fff0e42432c items=0 ppid=2978 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:38:01.127000 audit[3090]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.127000 audit[3090]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffc35e5e3e0 a2=0 a3=7ffc35e5e3cc items=0 ppid=2978 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:38:01.138000 audit[3091]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.138000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9b51d850 a2=0 a3=7ffd9b51d83c items=0 ppid=2978 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:38:01.152000 audit[3093]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.152000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff2a63fd0 a2=0 a3=7ffff2a63fbc items=0 ppid=2978 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.152000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:38:01.157428 kubelet[2865]: E0120 06:38:01.156006 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:01.194875 kubelet[2865]: I0120 06:38:01.194353 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-grz58" podStartSLOduration=2.194330483 podStartE2EDuration="2.194330483s" podCreationTimestamp="2026-01-20 06:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:38:01.192930504 +0000 UTC m=+6.399406537" watchObservedRunningTime="2026-01-20 06:38:01.194330483 +0000 UTC m=+6.400806516" Jan 20 06:38:01.196000 audit[3096]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.196000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec4304d50 a2=0 a3=7ffec4304d3c items=0 ppid=2978 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 06:38:01.201000 audit[3097]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3097 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.201000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffe763a80 a2=0 a3=7ffffe763a6c items=0 ppid=2978 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:38:01.222000 audit[3099]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.222000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb09c1140 a2=0 a3=7fffb09c112c items=0 ppid=2978 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:38:01.230000 audit[3100]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.230000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed2e4f2b0 a2=0 a3=7ffed2e4f29c items=0 ppid=2978 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.230000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:38:01.248000 audit[3102]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.248000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffdeeec3c0 a2=0 a3=7fffdeeec3ac items=0 ppid=2978 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:38:01.273000 audit[3105]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.273000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffac5966e0 a2=0 a3=7fffac5966cc items=0 ppid=2978 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:38:01.292000 audit[3108]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.292000 audit[3108]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=748 a0=3 a1=7fff81b7fa60 a2=0 a3=7fff81b7fa4c items=0 ppid=2978 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:38:01.297000 audit[3109]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.297000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed8e62d00 a2=0 a3=7ffed8e62cec items=0 ppid=2978 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:38:01.307000 audit[3111]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.307000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff1ccc03a0 a2=0 a3=7fff1ccc038c items=0 ppid=2978 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.307000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:38:01.327000 audit[3114]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.327000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7015be00 a2=0 a3=7ffc7015bdec items=0 ppid=2978 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.327000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:38:01.333000 audit[3115]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.333000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffebf03bc30 a2=0 a3=7ffebf03bc1c items=0 ppid=2978 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:38:01.344000 audit[3117]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:38:01.344000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe83ee4380 a2=0 a3=7ffe83ee436c items=0 ppid=2978 pid=3117 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:38:01.440000 audit[3123]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:01.440000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd6f8fae0 a2=0 a3=7ffcd6f8facc items=0 ppid=2978 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:01.468000 audit[3123]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:01.468000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcd6f8fae0 a2=0 a3=7ffcd6f8facc items=0 ppid=2978 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:01.475000 audit[3128]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3128 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.475000 audit[3128]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff0bb85e0 a2=0 a3=7ffff0bb85cc items=0 ppid=2978 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:38:01.489000 audit[3130]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.489000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe626942d0 a2=0 a3=7ffe626942bc items=0 ppid=2978 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 06:38:01.510000 audit[3134]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.510000 audit[3134]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd38bca5d0 a2=0 a3=7ffd38bca5bc items=0 ppid=2978 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.510000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 06:38:01.517000 audit[3138]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.517000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe473d1360 a2=0 a3=7ffe473d134c items=0 ppid=2978 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:38:01.532000 audit[3140]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.532000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb0848040 a2=0 a3=7fffb084802c items=0 ppid=2978 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:38:01.537000 audit[3141]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.537000 audit[3141]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7fff061c6140 a2=0 a3=7fff061c612c items=0 ppid=2978 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:38:01.550000 audit[3143]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.550000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe3b5b2960 a2=0 a3=7ffe3b5b294c items=0 ppid=2978 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 06:38:01.572000 audit[3146]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.572000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd12dbda30 a2=0 a3=7ffd12dbda1c items=0 ppid=2978 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.572000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:38:01.578000 audit[3147]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.578000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe00ff9ab0 a2=0 a3=7ffe00ff9a9c items=0 ppid=2978 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:38:01.590000 audit[3149]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.590000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8c019f50 a2=0 a3=7fff8c019f3c items=0 ppid=2978 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:38:01.596000 audit[3150]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.596000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc0afd51f0 a2=0 a3=7ffc0afd51dc items=0 ppid=2978 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:38:01.605468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86433176.mount: Deactivated successfully. Jan 20 06:38:01.610000 audit[3152]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.610000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc79f8cc0 a2=0 a3=7fffc79f8cac items=0 ppid=2978 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:38:01.632000 audit[3155]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.632000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd98b96b0 a2=0 a3=7fffd98b969c items=0 ppid=2978 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.632000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:38:01.657000 audit[3158]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.657000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2d05bdf0 a2=0 a3=7fff2d05bddc items=0 ppid=2978 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 06:38:01.665000 audit[3159]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.665000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0bfe3050 a2=0 a3=7ffc0bfe303c items=0 ppid=2978 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:38:01.682000 audit[3161]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.682000 audit[3161]: SYSCALL arch=c000003e syscall=46 
success=yes exit=524 a0=3 a1=7ffd8a283030 a2=0 a3=7ffd8a28301c items=0 ppid=2978 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:38:01.703000 audit[3164]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.703000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2415d430 a2=0 a3=7ffd2415d41c items=0 ppid=2978 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:38:01.709000 audit[3165]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.709000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdb3664f0 a2=0 a3=7fffdb3664dc items=0 ppid=2978 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.709000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:38:01.721000 audit[3167]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.721000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe11b68370 a2=0 a3=7ffe11b6835c items=0 ppid=2978 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:38:01.728000 audit[3168]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.728000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd470840f0 a2=0 a3=7ffd470840dc items=0 ppid=2978 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.728000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:38:01.741000 audit[3170]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.741000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc96bd4ab0 a2=0 a3=7ffc96bd4a9c items=0 ppid=2978 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:38:01.757000 audit[3173]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:38:01.757000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7f7acf50 a2=0 a3=7ffc7f7acf3c items=0 ppid=2978 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.757000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:38:01.773000 audit[3175]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:38:01.773000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc18533430 a2=0 a3=7ffc1853341c items=0 ppid=2978 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.773000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:01.775000 audit[3175]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:38:01.775000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc18533430 a2=0 a3=7ffc1853341c items=0 
ppid=2978 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:01.775000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:02.163112 kubelet[2865]: E0120 06:38:02.162942 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:04.705927 containerd[1645]: time="2026-01-20T06:38:04.705633903Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:04.708725 containerd[1645]: time="2026-01-20T06:38:04.708614751Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 20 06:38:04.711300 containerd[1645]: time="2026-01-20T06:38:04.711245808Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:04.715269 containerd[1645]: time="2026-01-20T06:38:04.715203227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:04.717217 containerd[1645]: time="2026-01-20T06:38:04.715939772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.256458543s" Jan 20 06:38:04.717217 
containerd[1645]: time="2026-01-20T06:38:04.716138153Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 06:38:04.724552 containerd[1645]: time="2026-01-20T06:38:04.724410045Z" level=info msg="CreateContainer within sandbox \"00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 06:38:04.744577 containerd[1645]: time="2026-01-20T06:38:04.744346561Z" level=info msg="Container c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:38:04.762239 containerd[1645]: time="2026-01-20T06:38:04.761692471Z" level=info msg="CreateContainer within sandbox \"00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89\"" Jan 20 06:38:04.763640 containerd[1645]: time="2026-01-20T06:38:04.763519257Z" level=info msg="StartContainer for \"c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89\"" Jan 20 06:38:04.766165 containerd[1645]: time="2026-01-20T06:38:04.765557818Z" level=info msg="connecting to shim c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89" address="unix:///run/containerd/s/87f75ab28a386072d628ed1c31e0d3bd4c918402fb08da94bb5be7278c277282" protocol=ttrpc version=3 Jan 20 06:38:04.804548 systemd[1]: Started cri-containerd-c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89.scope - libcontainer container c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89. 
Jan 20 06:38:04.848167 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 06:38:04.848457 kernel: audit: type=1334 audit(1768891084.833:506): prog-id=144 op=LOAD Jan 20 06:38:04.833000 audit: BPF prog-id=144 op=LOAD Jan 20 06:38:04.834000 audit: BPF prog-id=145 op=LOAD Jan 20 06:38:04.834000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.888442 kernel: audit: type=1334 audit(1768891084.834:507): prog-id=145 op=LOAD Jan 20 06:38:04.888571 kernel: audit: type=1300 audit(1768891084.834:507): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.895224 kernel: audit: type=1327 audit(1768891084.834:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.834000 audit: BPF prog-id=145 op=UNLOAD Jan 20 06:38:04.926590 kernel: audit: type=1334 audit(1768891084.834:508): prog-id=145 op=UNLOAD Jan 20 06:38:04.926704 kernel: audit: type=1300 audit(1768891084.834:508): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.834000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.928580 kubelet[2865]: E0120 06:38:04.928441 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:04.834000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.981883 containerd[1645]: time="2026-01-20T06:38:04.981764286Z" level=info msg="StartContainer for \"c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89\" returns successfully" Jan 20 06:38:04.992866 kernel: audit: type=1327 audit(1768891084.834:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.993398 kernel: audit: type=1334 audit(1768891084.835:509): prog-id=146 op=LOAD Jan 20 06:38:04.835000 audit: BPF prog-id=146 op=LOAD Jan 20 06:38:04.835000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:05.032900 kernel: audit: type=1300 audit(1768891084.835:509): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:05.033015 kernel: audit: type=1327 audit(1768891084.835:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.835000 audit: BPF prog-id=147 op=LOAD Jan 20 06:38:04.835000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.835000 audit: BPF prog-id=147 op=UNLOAD Jan 20 06:38:04.835000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.835000 audit: BPF prog-id=146 op=UNLOAD Jan 20 06:38:04.835000 audit[3180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:04.835000 audit: BPF prog-id=148 op=LOAD Jan 20 06:38:04.835000 audit[3180]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2991 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:04.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338383338343038633636626631653830393161376636336434306464 Jan 20 06:38:05.181169 kubelet[2865]: E0120 06:38:05.180942 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:05.214278 kubelet[2865]: I0120 06:38:05.214161 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rt8b4" podStartSLOduration=1.954279487 podStartE2EDuration="6.214008211s" podCreationTimestamp="2026-01-20 06:37:59 +0000 UTC" firstStartedPulling="2026-01-20 06:38:00.458283116 +0000 UTC m=+5.664759150" lastFinishedPulling="2026-01-20 06:38:04.718011842 +0000 UTC m=+9.924487874" observedRunningTime="2026-01-20 06:38:05.213992004 +0000 UTC m=+10.420468036" watchObservedRunningTime="2026-01-20 06:38:05.214008211 +0000 UTC m=+10.420484244" Jan 20 06:38:08.524751 kubelet[2865]: E0120 06:38:08.522759 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:09.398476 systemd[1]: cri-containerd-c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89.scope: Deactivated successfully. Jan 20 06:38:09.403000 audit: BPF prog-id=144 op=UNLOAD Jan 20 06:38:09.403000 audit: BPF prog-id=148 op=UNLOAD Jan 20 06:38:09.417491 containerd[1645]: time="2026-01-20T06:38:09.416736984Z" level=info msg="received container exit event container_id:\"c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89\" id:\"c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89\" pid:3193 exit_status:1 exited_at:{seconds:1768891089 nanos:415204085}" Jan 20 06:38:09.502967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89-rootfs.mount: Deactivated successfully. 
Jan 20 06:38:10.221974 kubelet[2865]: I0120 06:38:10.221707 2865 scope.go:117] "RemoveContainer" containerID="c8838408c66bf1e8091a7f63d40dd2b7c1390b9f72749a8e595f998870b7fc89" Jan 20 06:38:10.243657 containerd[1645]: time="2026-01-20T06:38:10.243197210Z" level=info msg="CreateContainer within sandbox \"00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 06:38:10.293751 containerd[1645]: time="2026-01-20T06:38:10.292560054Z" level=info msg="Container c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:38:10.319647 containerd[1645]: time="2026-01-20T06:38:10.319601281Z" level=info msg="CreateContainer within sandbox \"00e52a5c1e5ff918b93ef5ffcfde761c74d5819a657b11a5991aeccfdf5f22cc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7\"" Jan 20 06:38:10.323399 containerd[1645]: time="2026-01-20T06:38:10.322756867Z" level=info msg="StartContainer for \"c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7\"" Jan 20 06:38:10.337947 containerd[1645]: time="2026-01-20T06:38:10.337620098Z" level=info msg="connecting to shim c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7" address="unix:///run/containerd/s/87f75ab28a386072d628ed1c31e0d3bd4c918402fb08da94bb5be7278c277282" protocol=ttrpc version=3 Jan 20 06:38:10.413501 systemd[1]: Started cri-containerd-c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7.scope - libcontainer container c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7. 
Jan 20 06:38:10.459000 audit: BPF prog-id=149 op=LOAD Jan 20 06:38:10.466371 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 20 06:38:10.466459 kernel: audit: type=1334 audit(1768891090.459:516): prog-id=149 op=LOAD Jan 20 06:38:10.461000 audit: BPF prog-id=150 op=LOAD Jan 20 06:38:10.483378 kernel: audit: type=1334 audit(1768891090.461:517): prog-id=150 op=LOAD Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.547520 kernel: audit: type=1300 audit(1768891090.461:517): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220238 a2=98 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.547653 kernel: audit: type=1327 audit(1768891090.461:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit: BPF prog-id=150 op=UNLOAD Jan 20 06:38:10.590525 kernel: audit: type=1334 audit(1768891090.461:518): prog-id=150 op=UNLOAD Jan 20 06:38:10.590672 kernel: audit: type=1300 audit(1768891090.461:518): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.632353 kernel: audit: type=1327 audit(1768891090.461:518): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.632475 kernel: audit: type=1334 audit(1768891090.461:519): prog-id=151 op=LOAD Jan 20 06:38:10.461000 audit: BPF prog-id=151 op=LOAD Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.672444 containerd[1645]: time="2026-01-20T06:38:10.666544631Z" level=info msg="StartContainer for \"c957e9aa5ec80e4c6959002397daa8150577f3b42e812d963343840bec645ca7\" returns successfully" Jan 20 06:38:10.674196 kernel: audit: type=1300 audit(1768891090.461:519): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000220488 a2=98 a3=0 items=0 
ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit: BPF prog-id=152 op=LOAD Jan 20 06:38:10.713617 kernel: audit: type=1327 audit(1768891090.461:519): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000220218 a2=98 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit: BPF prog-id=152 op=UNLOAD Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit: BPF prog-id=151 op=UNLOAD Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:10.461000 audit: BPF prog-id=153 op=LOAD Jan 20 06:38:10.461000 audit[3264]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002206e8 a2=98 a3=0 items=0 ppid=2991 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:10.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339353765396161356563383065346336393539303032333937646161 Jan 20 06:38:11.460590 sudo[1858]: pam_unix(sudo:session): session closed for user root Jan 20 06:38:11.459000 audit[1858]: USER_END pid=1858 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 06:38:11.460000 audit[1858]: CRED_DISP pid=1858 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:38:11.472303 sshd[1857]: Connection closed by 10.0.0.1 port 36620 Jan 20 06:38:11.473994 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Jan 20 06:38:11.479000 audit[1853]: USER_END pid=1853 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:38:11.479000 audit[1853]: CRED_DISP pid=1853 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:38:11.485309 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:36620.service: Deactivated successfully. Jan 20 06:38:11.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.35:22-10.0.0.1:36620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:38:11.490441 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 06:38:11.491273 systemd[1]: session-8.scope: Consumed 8.429s CPU time, 216.1M memory peak. Jan 20 06:38:11.497649 systemd-logind[1623]: Session 8 logged out. Waiting for processes to exit. Jan 20 06:38:11.501534 systemd-logind[1623]: Removed session 8. 
Jan 20 06:38:15.827975 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 20 06:38:15.828456 kernel: audit: type=1325 audit(1768891095.795:529): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:15.795000 audit[3324]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:15.795000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7641f220 a2=0 a3=7fff7641f20c items=0 ppid=2978 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:15.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:15.903500 kernel: audit: type=1300 audit(1768891095.795:529): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7641f220 a2=0 a3=7fff7641f20c items=0 ppid=2978 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:15.903600 kernel: audit: type=1327 audit(1768891095.795:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:15.886000 audit[3324]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:15.925459 kernel: audit: type=1325 audit(1768891095.886:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:15.925593 kernel: audit: type=1300 audit(1768891095.886:530): 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7641f220 a2=0 a3=0 items=0 ppid=2978 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:15.886000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7641f220 a2=0 a3=0 items=0 ppid=2978 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:15.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:15.990405 kernel: audit: type=1327 audit(1768891095.886:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:16.041000 audit[3326]: NETFILTER_CFG table=filter:107 family=2 entries=15 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:16.067545 kernel: audit: type=1325 audit(1768891096.041:531): table=filter:107 family=2 entries=15 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:16.041000 audit[3326]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb8b72f70 a2=0 a3=7fffb8b72f5c items=0 ppid=2978 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:16.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:16.141322 kernel: audit: type=1300 audit(1768891096.041:531): arch=c000003e syscall=46 
success=yes exit=5992 a0=3 a1=7fffb8b72f70 a2=0 a3=7fffb8b72f5c items=0 ppid=2978 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:16.141428 kernel: audit: type=1327 audit(1768891096.041:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:16.074000 audit[3326]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:16.161724 kernel: audit: type=1325 audit(1768891096.074:532): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:16.074000 audit[3326]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb8b72f70 a2=0 a3=0 items=0 ppid=2978 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:16.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.636000 audit[3329]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.645520 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 06:38:21.645737 kernel: audit: type=1325 audit(1768891101.636:533): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.636000 audit[3329]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff984e20f0 a2=0 a3=7fff984e20dc items=0 ppid=2978 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.722343 kernel: audit: type=1300 audit(1768891101.636:533): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff984e20f0 a2=0 a3=7fff984e20dc items=0 ppid=2978 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.753595 kernel: audit: type=1327 audit(1768891101.636:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.672000 audit[3329]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.672000 audit[3329]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff984e20f0 a2=0 a3=0 items=0 ppid=2978 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.821570 kernel: audit: type=1325 audit(1768891101.672:534): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3329 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.821719 kernel: audit: type=1300 audit(1768891101.672:534): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff984e20f0 a2=0 a3=0 items=0 ppid=2978 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:38:21.821769 kernel: audit: type=1327 audit(1768891101.672:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.862000 audit[3331]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.941202 kernel: audit: type=1325 audit(1768891101.862:535): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.941491 kernel: audit: type=1300 audit(1768891101.862:535): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdfe0335c0 a2=0 a3=7ffdfe0335ac items=0 ppid=2978 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.862000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdfe0335c0 a2=0 a3=7ffdfe0335ac items=0 ppid=2978 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.965249 kernel: audit: type=1327 audit(1768891101.862:535): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:21.969000 audit[3331]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 06:38:21.995568 kernel: audit: type=1325 audit(1768891101.969:536): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:21.969000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfe0335c0 a2=0 a3=0 items=0 ppid=2978 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:21.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:23.032000 audit[3333]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:23.032000 audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc80c943e0 a2=0 a3=7ffc80c943cc items=0 ppid=2978 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:23.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:23.037000 audit[3333]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:23.037000 audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc80c943e0 a2=0 a3=0 items=0 ppid=2978 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:23.037000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:24.911000 audit[3335]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:24.911000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd9431e760 a2=0 a3=7ffd9431e74c items=0 ppid=2978 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:24.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:24.919000 audit[3335]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:24.919000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9431e760 a2=0 a3=0 items=0 ppid=2978 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:24.919000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:24.974415 systemd[1]: Created slice kubepods-besteffort-pod5094b7c9_729f_4616_b1c2_30c8ab078070.slice - libcontainer container kubepods-besteffort-pod5094b7c9_729f_4616_b1c2_30c8ab078070.slice. 
Jan 20 06:38:25.108223 kubelet[2865]: I0120 06:38:25.107280 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbwl\" (UniqueName: \"kubernetes.io/projected/5094b7c9-729f-4616-b1c2-30c8ab078070-kube-api-access-scbwl\") pod \"calico-typha-76f67f64d8-stpp6\" (UID: \"5094b7c9-729f-4616-b1c2-30c8ab078070\") " pod="calico-system/calico-typha-76f67f64d8-stpp6" Jan 20 06:38:25.112686 kubelet[2865]: I0120 06:38:25.112295 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5094b7c9-729f-4616-b1c2-30c8ab078070-tigera-ca-bundle\") pod \"calico-typha-76f67f64d8-stpp6\" (UID: \"5094b7c9-729f-4616-b1c2-30c8ab078070\") " pod="calico-system/calico-typha-76f67f64d8-stpp6" Jan 20 06:38:25.112686 kubelet[2865]: I0120 06:38:25.112350 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5094b7c9-729f-4616-b1c2-30c8ab078070-typha-certs\") pod \"calico-typha-76f67f64d8-stpp6\" (UID: \"5094b7c9-729f-4616-b1c2-30c8ab078070\") " pod="calico-system/calico-typha-76f67f64d8-stpp6" Jan 20 06:38:25.126000 audit[3337]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:25.126000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff073e0b60 a2=0 a3=7fff073e0b4c items=0 ppid=2978 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:25.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:25.136000 audit[3337]: NETFILTER_CFG table=nat:118 
family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:25.136000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff073e0b60 a2=0 a3=0 items=0 ppid=2978 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:25.136000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:25.290386 kubelet[2865]: E0120 06:38:25.287876 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:25.299882 containerd[1645]: time="2026-01-20T06:38:25.299827393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76f67f64d8-stpp6,Uid:5094b7c9-729f-4616-b1c2-30c8ab078070,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:25.439425 systemd[1]: Created slice kubepods-besteffort-pod0a1e1c72_e49a_439c_b6f1_4faf5523b350.slice - libcontainer container kubepods-besteffort-pod0a1e1c72_e49a_439c_b6f1_4faf5523b350.slice. 
Jan 20 06:38:25.469159 containerd[1645]: time="2026-01-20T06:38:25.467377967Z" level=info msg="connecting to shim 87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada" address="unix:///run/containerd/s/82f1836c2510de886be1eade27e61d6812a2bbaf4c5de3d7c536c92a6052f1cb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:38:25.526245 kubelet[2865]: I0120 06:38:25.525722 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1e1c72-e49a-439c-b6f1-4faf5523b350-tigera-ca-bundle\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.526245 kubelet[2865]: I0120 06:38:25.525791 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-xtables-lock\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.526245 kubelet[2865]: I0120 06:38:25.525830 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-cni-log-dir\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.526245 kubelet[2865]: I0120 06:38:25.525858 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-cni-bin-dir\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.526245 kubelet[2865]: I0120 06:38:25.525883 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-var-lib-calico\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.530008 kubelet[2865]: I0120 06:38:25.526579 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-flexvol-driver-host\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.531391 kubelet[2865]: I0120 06:38:25.531372 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-lib-modules\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.531483 kubelet[2865]: I0120 06:38:25.531469 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-policysync\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.531561 kubelet[2865]: I0120 06:38:25.531547 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-var-run-calico\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.531674 kubelet[2865]: I0120 06:38:25.531652 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr6pg\" (UniqueName: 
\"kubernetes.io/projected/0a1e1c72-e49a-439c-b6f1-4faf5523b350-kube-api-access-qr6pg\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.532705 kubelet[2865]: I0120 06:38:25.531747 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0a1e1c72-e49a-439c-b6f1-4faf5523b350-cni-net-dir\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.532853 kubelet[2865]: I0120 06:38:25.532823 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0a1e1c72-e49a-439c-b6f1-4faf5523b350-node-certs\") pod \"calico-node-68d9v\" (UID: \"0a1e1c72-e49a-439c-b6f1-4faf5523b350\") " pod="calico-system/calico-node-68d9v" Jan 20 06:38:25.599365 kubelet[2865]: E0120 06:38:25.599313 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:25.634782 kubelet[2865]: I0120 06:38:25.634735 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/67f738e9-ce9e-42e1-a454-66084ff2d3ad-socket-dir\") pod \"csi-node-driver-kp869\" (UID: \"67f738e9-ce9e-42e1-a454-66084ff2d3ad\") " pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:25.638324 kubelet[2865]: I0120 06:38:25.637251 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6rc\" (UniqueName: 
\"kubernetes.io/projected/67f738e9-ce9e-42e1-a454-66084ff2d3ad-kube-api-access-jk6rc\") pod \"csi-node-driver-kp869\" (UID: \"67f738e9-ce9e-42e1-a454-66084ff2d3ad\") " pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:25.638324 kubelet[2865]: I0120 06:38:25.637323 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/67f738e9-ce9e-42e1-a454-66084ff2d3ad-registration-dir\") pod \"csi-node-driver-kp869\" (UID: \"67f738e9-ce9e-42e1-a454-66084ff2d3ad\") " pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:25.638324 kubelet[2865]: I0120 06:38:25.637399 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/67f738e9-ce9e-42e1-a454-66084ff2d3ad-varrun\") pod \"csi-node-driver-kp869\" (UID: \"67f738e9-ce9e-42e1-a454-66084ff2d3ad\") " pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:25.638324 kubelet[2865]: I0120 06:38:25.637492 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67f738e9-ce9e-42e1-a454-66084ff2d3ad-kubelet-dir\") pod \"csi-node-driver-kp869\" (UID: \"67f738e9-ce9e-42e1-a454-66084ff2d3ad\") " pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:25.657661 kubelet[2865]: E0120 06:38:25.657618 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.657850 kubelet[2865]: W0120 06:38:25.657824 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.658328 kubelet[2865]: E0120 06:38:25.658307 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.701644 kubelet[2865]: E0120 06:38:25.701538 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.701644 kubelet[2865]: W0120 06:38:25.701566 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.701644 kubelet[2865]: E0120 06:38:25.701592 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.728219 kubelet[2865]: E0120 06:38:25.726728 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.728219 kubelet[2865]: W0120 06:38:25.726766 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.728219 kubelet[2865]: E0120 06:38:25.726797 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.741796 kubelet[2865]: E0120 06:38:25.741568 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.741796 kubelet[2865]: W0120 06:38:25.741600 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.741796 kubelet[2865]: E0120 06:38:25.741627 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.744717 kubelet[2865]: E0120 06:38:25.744522 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.744717 kubelet[2865]: W0120 06:38:25.744649 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.744717 kubelet[2865]: E0120 06:38:25.744683 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.745883 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.748274 kubelet[2865]: W0120 06:38:25.746015 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.746251 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.746502 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.748274 kubelet[2865]: W0120 06:38:25.746513 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.746566 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.746769 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.748274 kubelet[2865]: W0120 06:38:25.747502 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.748274 kubelet[2865]: E0120 06:38:25.747523 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.749863 kubelet[2865]: E0120 06:38:25.749399 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.749863 kubelet[2865]: W0120 06:38:25.749501 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.749863 kubelet[2865]: E0120 06:38:25.749561 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.749863 kubelet[2865]: E0120 06:38:25.749762 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.749863 kubelet[2865]: W0120 06:38:25.749773 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.749863 kubelet[2865]: E0120 06:38:25.749832 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.750756 kubelet[2865]: E0120 06:38:25.750381 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.750756 kubelet[2865]: W0120 06:38:25.750393 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.750756 kubelet[2865]: E0120 06:38:25.750444 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.750756 kubelet[2865]: E0120 06:38:25.750643 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.750756 kubelet[2865]: W0120 06:38:25.750655 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.750756 kubelet[2865]: E0120 06:38:25.750709 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.751695 kubelet[2865]: E0120 06:38:25.750882 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.751695 kubelet[2865]: W0120 06:38:25.750892 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.751695 kubelet[2865]: E0120 06:38:25.751353 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.751816 kubelet[2865]: E0120 06:38:25.751804 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.751816 kubelet[2865]: W0120 06:38:25.751815 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.752013 kubelet[2865]: E0120 06:38:25.751869 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.752586 kubelet[2865]: E0120 06:38:25.752377 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.752586 kubelet[2865]: W0120 06:38:25.752390 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.752673 kubelet[2865]: E0120 06:38:25.752635 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.752673 kubelet[2865]: W0120 06:38:25.752645 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.755378 kubelet[2865]: E0120 06:38:25.754395 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.755378 kubelet[2865]: E0120 06:38:25.754464 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.758521 kubelet[2865]: E0120 06:38:25.758242 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.758521 kubelet[2865]: W0120 06:38:25.758302 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.758521 kubelet[2865]: E0120 06:38:25.758333 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.761737 kubelet[2865]: E0120 06:38:25.761221 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.761737 kubelet[2865]: W0120 06:38:25.761237 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.762622 kubelet[2865]: E0120 06:38:25.762236 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.762622 kubelet[2865]: W0120 06:38:25.762266 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.762622 kubelet[2865]: E0120 06:38:25.762584 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.763378 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.766261 kubelet[2865]: W0120 06:38:25.763396 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.763631 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.766261 kubelet[2865]: W0120 06:38:25.763641 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.764019 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.766261 kubelet[2865]: W0120 06:38:25.764249 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.764268 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.764313 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.764330 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.766261 kubelet[2865]: E0120 06:38:25.765529 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.766537 kubelet[2865]: E0120 06:38:25.765552 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.766537 kubelet[2865]: E0120 06:38:25.765855 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.766537 kubelet[2865]: W0120 06:38:25.765873 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.766537 kubelet[2865]: E0120 06:38:25.765888 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.766537 kubelet[2865]: E0120 06:38:25.766378 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.766537 kubelet[2865]: W0120 06:38:25.766387 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.766537 kubelet[2865]: E0120 06:38:25.766396 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.773422 kubelet[2865]: E0120 06:38:25.767397 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.773422 kubelet[2865]: W0120 06:38:25.767512 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.773422 kubelet[2865]: E0120 06:38:25.767524 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.773422 kubelet[2865]: E0120 06:38:25.768452 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.773422 kubelet[2865]: W0120 06:38:25.768679 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.773422 kubelet[2865]: E0120 06:38:25.770413 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.773644 containerd[1645]: time="2026-01-20T06:38:25.768716940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68d9v,Uid:0a1e1c72-e49a-439c-b6f1-4faf5523b350,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:25.774497 kubelet[2865]: E0120 06:38:25.774018 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.774497 kubelet[2865]: W0120 06:38:25.774477 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.774497 kubelet[2865]: E0120 06:38:25.774495 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.776798 kubelet[2865]: E0120 06:38:25.776664 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.776880 kubelet[2865]: W0120 06:38:25.776799 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.776880 kubelet[2865]: E0120 06:38:25.776828 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:25.781873 systemd[1]: Started cri-containerd-87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada.scope - libcontainer container 87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada. Jan 20 06:38:25.852700 kubelet[2865]: E0120 06:38:25.848616 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:25.852700 kubelet[2865]: W0120 06:38:25.848754 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:25.852700 kubelet[2865]: E0120 06:38:25.848779 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:25.976774 containerd[1645]: time="2026-01-20T06:38:25.975649986Z" level=info msg="connecting to shim f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3" address="unix:///run/containerd/s/e86dd04ad0392cb7d8097839e6254d1667a726e89a4b03b62cba0101618aeaf3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:38:26.026000 audit: BPF prog-id=154 op=LOAD Jan 20 06:38:26.032000 audit: BPF prog-id=155 op=LOAD Jan 20 06:38:26.032000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.032000 audit: BPF prog-id=155 op=UNLOAD Jan 20 06:38:26.032000 audit[3361]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.034000 audit: BPF prog-id=156 op=LOAD Jan 20 06:38:26.034000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.035000 audit: BPF prog-id=157 op=LOAD Jan 20 06:38:26.035000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.036000 audit: BPF prog-id=157 op=UNLOAD Jan 20 06:38:26.036000 audit[3361]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.036000 audit: BPF prog-id=156 op=UNLOAD Jan 20 06:38:26.036000 audit[3361]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 
pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.037000 audit: BPF prog-id=158 op=LOAD Jan 20 06:38:26.037000 audit[3361]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3349 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837363432613033303632353839663564323861383663303230326237 Jan 20 06:38:26.212706 systemd[1]: Started cri-containerd-f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3.scope - libcontainer container f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3. 
Jan 20 06:38:26.256000 audit[3446]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:26.256000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd3b696da0 a2=0 a3=7ffd3b696d8c items=0 ppid=2978 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:26.287000 audit[3446]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:26.287000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3b696da0 a2=0 a3=0 items=0 ppid=2978 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:26.309692 containerd[1645]: time="2026-01-20T06:38:26.309424328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76f67f64d8-stpp6,Uid:5094b7c9-729f-4616-b1c2-30c8ab078070,Namespace:calico-system,Attempt:0,} returns sandbox id \"87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada\"" Jan 20 06:38:26.329519 kubelet[2865]: E0120 06:38:26.328609 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:26.337855 containerd[1645]: 
time="2026-01-20T06:38:26.336621912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 06:38:26.389000 audit: BPF prog-id=159 op=LOAD Jan 20 06:38:26.390000 audit: BPF prog-id=160 op=LOAD Jan 20 06:38:26.390000 audit[3433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6238 a2=98 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.390000 audit: BPF prog-id=160 op=UNLOAD Jan 20 06:38:26.390000 audit[3433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.392000 audit: BPF prog-id=161 op=LOAD Jan 20 06:38:26.392000 audit[3433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d6488 a2=98 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.392000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.392000 audit: BPF prog-id=162 op=LOAD Jan 20 06:38:26.392000 audit[3433]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001d6218 a2=98 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.392000 audit: BPF prog-id=162 op=UNLOAD Jan 20 06:38:26.392000 audit[3433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.392000 audit: BPF prog-id=161 op=UNLOAD Jan 20 06:38:26.392000 audit[3433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:38:26.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.392000 audit: BPF prog-id=163 op=LOAD Jan 20 06:38:26.392000 audit[3433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d66e8 a2=98 a3=0 items=0 ppid=3422 pid=3433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:26.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630313537336666376330363535303531613761616363353061666466 Jan 20 06:38:26.548210 containerd[1645]: time="2026-01-20T06:38:26.547576583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68d9v,Uid:0a1e1c72-e49a-439c-b6f1-4faf5523b350,Namespace:calico-system,Attempt:0,} returns sandbox id \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\"" Jan 20 06:38:26.553367 kubelet[2865]: E0120 06:38:26.553340 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:27.380572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519047875.mount: Deactivated successfully. 
Jan 20 06:38:28.070429 kubelet[2865]: E0120 06:38:28.070192 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:30.072601 kubelet[2865]: E0120 06:38:30.072367 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:30.377610 containerd[1645]: time="2026-01-20T06:38:30.376886561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:30.380844 containerd[1645]: time="2026-01-20T06:38:30.380561587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736633" Jan 20 06:38:30.383314 containerd[1645]: time="2026-01-20T06:38:30.383288024Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:30.389678 containerd[1645]: time="2026-01-20T06:38:30.389510710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:30.392358 containerd[1645]: time="2026-01-20T06:38:30.391848785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.055181019s" Jan 20 06:38:30.392358 containerd[1645]: time="2026-01-20T06:38:30.392284205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 06:38:30.398185 containerd[1645]: time="2026-01-20T06:38:30.396728666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 06:38:30.436564 containerd[1645]: time="2026-01-20T06:38:30.436387572Z" level=info msg="CreateContainer within sandbox \"87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 06:38:30.466220 containerd[1645]: time="2026-01-20T06:38:30.464793014Z" level=info msg="Container 3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:38:30.471415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185091964.mount: Deactivated successfully. 
Jan 20 06:38:30.497859 containerd[1645]: time="2026-01-20T06:38:30.497579374Z" level=info msg="CreateContainer within sandbox \"87642a03062589f5d28a86c0202b7b17c2f081e65a42c3eb3482c7bd091d5ada\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7\"" Jan 20 06:38:30.503495 containerd[1645]: time="2026-01-20T06:38:30.503348818Z" level=info msg="StartContainer for \"3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7\"" Jan 20 06:38:30.506339 containerd[1645]: time="2026-01-20T06:38:30.505746294Z" level=info msg="connecting to shim 3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7" address="unix:///run/containerd/s/82f1836c2510de886be1eade27e61d6812a2bbaf4c5de3d7c536c92a6052f1cb" protocol=ttrpc version=3 Jan 20 06:38:30.577754 systemd[1]: Started cri-containerd-3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7.scope - libcontainer container 3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7. 
Jan 20 06:38:30.637000 audit: BPF prog-id=164 op=LOAD Jan 20 06:38:30.647471 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 20 06:38:30.647587 kernel: audit: type=1334 audit(1768891110.637:561): prog-id=164 op=LOAD Jan 20 06:38:30.641000 audit: BPF prog-id=165 op=LOAD Jan 20 06:38:30.666416 kernel: audit: type=1334 audit(1768891110.641:562): prog-id=165 op=LOAD Jan 20 06:38:30.641000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.705179 kernel: audit: type=1300 audit(1768891110.641:562): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.705393 kernel: audit: type=1327 audit(1768891110.641:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.754485 kernel: audit: type=1334 audit(1768891110.642:563): prog-id=165 op=UNLOAD Jan 20 06:38:30.642000 audit: BPF prog-id=165 op=UNLOAD Jan 20 06:38:30.642000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3479 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.791984 kernel: audit: type=1300 audit(1768891110.642:563): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.793597 kernel: audit: type=1327 audit(1768891110.642:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.642000 audit: BPF prog-id=166 op=LOAD Jan 20 06:38:30.844479 kernel: audit: type=1334 audit(1768891110.642:564): prog-id=166 op=LOAD Jan 20 06:38:30.642000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.870869 containerd[1645]: time="2026-01-20T06:38:30.870773019Z" level=info msg="StartContainer for \"3b56750b1ac9e020d7d06c98a94d0732ae5c7e609b87d358d7a072159ea67ba7\" returns successfully" Jan 20 06:38:30.890674 kernel: audit: type=1300 audit(1768891110.642:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 
items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.890821 kernel: audit: type=1327 audit(1768891110.642:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.642000 audit: BPF prog-id=167 op=LOAD Jan 20 06:38:30.642000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.643000 audit: BPF prog-id=167 op=UNLOAD Jan 20 06:38:30.643000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.643000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.643000 audit: BPF prog-id=166 op=UNLOAD Jan 20 06:38:30.643000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:30.643000 audit: BPF prog-id=168 op=LOAD Jan 20 06:38:30.643000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3349 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:30.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362353637353062316163396530323064376430366339386139346430 Jan 20 06:38:31.398845 kubelet[2865]: E0120 06:38:31.398811 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:31.416233 kubelet[2865]: E0120 06:38:31.414654 2865 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Jan 20 06:38:31.417346 kubelet[2865]: W0120 06:38:31.417326 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.417509 kubelet[2865]: E0120 06:38:31.417432 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.420653 kubelet[2865]: E0120 06:38:31.420639 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.420767 kubelet[2865]: W0120 06:38:31.420748 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.420849 kubelet[2865]: E0120 06:38:31.420833 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.424211 kubelet[2865]: E0120 06:38:31.423372 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.424211 kubelet[2865]: W0120 06:38:31.423384 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.424211 kubelet[2865]: E0120 06:38:31.423395 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.430810 kubelet[2865]: E0120 06:38:31.430432 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.430810 kubelet[2865]: W0120 06:38:31.430448 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.430810 kubelet[2865]: E0120 06:38:31.430464 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.431621 kubelet[2865]: E0120 06:38:31.431608 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.431681 kubelet[2865]: W0120 06:38:31.431669 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.431761 kubelet[2865]: E0120 06:38:31.431746 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.435357 kubelet[2865]: E0120 06:38:31.435189 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.435357 kubelet[2865]: W0120 06:38:31.435202 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.435357 kubelet[2865]: E0120 06:38:31.435212 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.436841 kubelet[2865]: E0120 06:38:31.436824 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.437263 kubelet[2865]: W0120 06:38:31.437246 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.437521 kubelet[2865]: E0120 06:38:31.437338 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.439888 kubelet[2865]: E0120 06:38:31.439338 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.439888 kubelet[2865]: W0120 06:38:31.439355 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.439888 kubelet[2865]: E0120 06:38:31.439370 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.444255 kubelet[2865]: E0120 06:38:31.443469 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.444255 kubelet[2865]: W0120 06:38:31.443482 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.444255 kubelet[2865]: E0120 06:38:31.443495 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.445761 kubelet[2865]: E0120 06:38:31.445260 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.445761 kubelet[2865]: W0120 06:38:31.445276 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.445761 kubelet[2865]: E0120 06:38:31.445289 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.447773 kubelet[2865]: E0120 06:38:31.447705 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.447773 kubelet[2865]: W0120 06:38:31.447718 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.447773 kubelet[2865]: E0120 06:38:31.447728 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.450604 kubelet[2865]: E0120 06:38:31.450579 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.451001 kubelet[2865]: W0120 06:38:31.450679 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.451001 kubelet[2865]: E0120 06:38:31.450701 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.453341 kubelet[2865]: E0120 06:38:31.452648 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.453341 kubelet[2865]: W0120 06:38:31.452664 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.453341 kubelet[2865]: E0120 06:38:31.452678 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.454019 kubelet[2865]: E0120 06:38:31.453889 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.456419 kubelet[2865]: W0120 06:38:31.456281 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.456419 kubelet[2865]: E0120 06:38:31.456303 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.461670 kubelet[2865]: E0120 06:38:31.461589 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.461670 kubelet[2865]: W0120 06:38:31.461604 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.461670 kubelet[2865]: E0120 06:38:31.461616 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.521336 containerd[1645]: time="2026-01-20T06:38:31.520623969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:31.526342 containerd[1645]: time="2026-01-20T06:38:31.525781516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 06:38:31.536213 containerd[1645]: time="2026-01-20T06:38:31.535454055Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:31.544889 containerd[1645]: time="2026-01-20T06:38:31.542465288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:31.545795 kubelet[2865]: E0120 06:38:31.544335 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.545795 kubelet[2865]: W0120 06:38:31.544362 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.545795 kubelet[2865]: E0120 06:38:31.544666 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.548867 containerd[1645]: time="2026-01-20T06:38:31.548835388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.152071627s" Jan 20 06:38:31.550219 kubelet[2865]: E0120 06:38:31.550202 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.551304 kubelet[2865]: W0120 06:38:31.551220 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.551304 kubelet[2865]: E0120 06:38:31.551248 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.553159 containerd[1645]: time="2026-01-20T06:38:31.552332930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 06:38:31.553878 kubelet[2865]: E0120 06:38:31.553847 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.554768 kubelet[2865]: W0120 06:38:31.554306 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.554882 kubelet[2865]: E0120 06:38:31.554864 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.557347 kubelet[2865]: E0120 06:38:31.556856 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.562248 kubelet[2865]: W0120 06:38:31.562216 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.563655 kubelet[2865]: E0120 06:38:31.562833 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.565188 kubelet[2865]: E0120 06:38:31.564442 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.567591 kubelet[2865]: W0120 06:38:31.567567 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.567725 kubelet[2865]: E0120 06:38:31.567695 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.568778 kubelet[2865]: E0120 06:38:31.568763 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.568854 kubelet[2865]: W0120 06:38:31.568840 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.569749 kubelet[2865]: E0120 06:38:31.569729 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.571246 kubelet[2865]: E0120 06:38:31.571230 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.571347 kubelet[2865]: W0120 06:38:31.571333 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.573225 containerd[1645]: time="2026-01-20T06:38:31.572251626Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 06:38:31.573675 kubelet[2865]: E0120 06:38:31.573577 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.573675 kubelet[2865]: E0120 06:38:31.573652 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.573675 kubelet[2865]: W0120 06:38:31.573660 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.574013 kubelet[2865]: E0120 06:38:31.573997 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.576319 kubelet[2865]: E0120 06:38:31.576302 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.577633 kubelet[2865]: W0120 06:38:31.577225 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.577633 kubelet[2865]: E0120 06:38:31.577247 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.577885 kubelet[2865]: E0120 06:38:31.577866 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.578685 kubelet[2865]: W0120 06:38:31.578279 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.579652 kubelet[2865]: E0120 06:38:31.579536 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.580323 kubelet[2865]: E0120 06:38:31.580305 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.580538 kubelet[2865]: W0120 06:38:31.580409 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.581430 kubelet[2865]: E0120 06:38:31.581292 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.583719 kubelet[2865]: E0120 06:38:31.583237 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.583719 kubelet[2865]: W0120 06:38:31.583251 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.583719 kubelet[2865]: E0120 06:38:31.583261 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.585120 kubelet[2865]: E0120 06:38:31.584702 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.585120 kubelet[2865]: W0120 06:38:31.584719 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.590167 kubelet[2865]: E0120 06:38:31.588393 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.590167 kubelet[2865]: E0120 06:38:31.588496 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.590167 kubelet[2865]: W0120 06:38:31.588510 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.590167 kubelet[2865]: E0120 06:38:31.588530 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.597447 kubelet[2865]: E0120 06:38:31.596835 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.597447 kubelet[2865]: W0120 06:38:31.596857 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.597611 kubelet[2865]: E0120 06:38:31.597589 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.598441 kubelet[2865]: E0120 06:38:31.598427 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.598499 kubelet[2865]: W0120 06:38:31.598488 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.598554 kubelet[2865]: E0120 06:38:31.598543 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.603249 kubelet[2865]: E0120 06:38:31.602679 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.603249 kubelet[2865]: W0120 06:38:31.602698 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.603249 kubelet[2865]: E0120 06:38:31.602714 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:38:31.605554 kubelet[2865]: E0120 06:38:31.605497 2865 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:38:31.605554 kubelet[2865]: W0120 06:38:31.605511 2865 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:38:31.605554 kubelet[2865]: E0120 06:38:31.605524 2865 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:38:31.638357 containerd[1645]: time="2026-01-20T06:38:31.638318286Z" level=info msg="Container d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:38:31.663469 containerd[1645]: time="2026-01-20T06:38:31.663215280Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758\"" Jan 20 06:38:31.665433 containerd[1645]: time="2026-01-20T06:38:31.664336884Z" level=info msg="StartContainer for \"d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758\"" Jan 20 06:38:31.674575 containerd[1645]: time="2026-01-20T06:38:31.673635986Z" level=info msg="connecting to shim d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758" address="unix:///run/containerd/s/e86dd04ad0392cb7d8097839e6254d1667a726e89a4b03b62cba0101618aeaf3" protocol=ttrpc version=3 Jan 20 06:38:31.811368 systemd[1]: Started cri-containerd-d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758.scope - libcontainer container d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758. 
Jan 20 06:38:31.937000 audit: BPF prog-id=169 op=LOAD Jan 20 06:38:31.937000 audit[3557]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e8488 a2=98 a3=0 items=0 ppid=3422 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:31.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437313134333339326237633764376564303335633063623139643437 Jan 20 06:38:31.938000 audit: BPF prog-id=170 op=LOAD Jan 20 06:38:31.938000 audit[3557]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000e8218 a2=98 a3=0 items=0 ppid=3422 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:31.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437313134333339326237633764376564303335633063623139643437 Jan 20 06:38:31.938000 audit: BPF prog-id=170 op=UNLOAD Jan 20 06:38:31.938000 audit[3557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:31.938000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437313134333339326237633764376564303335633063623139643437 Jan 20 06:38:31.938000 audit: BPF prog-id=169 op=UNLOAD Jan 20 06:38:31.938000 audit[3557]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:31.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437313134333339326237633764376564303335633063623139643437 Jan 20 06:38:31.938000 audit: BPF prog-id=171 op=LOAD Jan 20 06:38:31.938000 audit[3557]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e86e8 a2=98 a3=0 items=0 ppid=3422 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:31.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437313134333339326237633764376564303335633063623139643437 Jan 20 06:38:32.028269 containerd[1645]: time="2026-01-20T06:38:32.027287688Z" level=info msg="StartContainer for \"d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758\" returns successfully" Jan 20 06:38:32.055324 systemd[1]: cri-containerd-d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758.scope: Deactivated successfully. 
Jan 20 06:38:32.060129 containerd[1645]: time="2026-01-20T06:38:32.059002330Z" level=info msg="received container exit event container_id:\"d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758\" id:\"d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758\" pid:3570 exited_at:{seconds:1768891112 nanos:57360578}" Jan 20 06:38:32.060000 audit: BPF prog-id=171 op=UNLOAD Jan 20 06:38:32.072177 kubelet[2865]: E0120 06:38:32.071466 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:32.200830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d71143392b7c7d7ed035c0cb19d472d5dc69da6d8fe82e337db3d1e0eba3f758-rootfs.mount: Deactivated successfully. Jan 20 06:38:32.411163 kubelet[2865]: E0120 06:38:32.410378 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:32.411642 kubelet[2865]: E0120 06:38:32.411284 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:32.413501 containerd[1645]: time="2026-01-20T06:38:32.412842222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 06:38:32.459261 kubelet[2865]: I0120 06:38:32.456631 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76f67f64d8-stpp6" podStartSLOduration=4.395146123 podStartE2EDuration="8.456607589s" podCreationTimestamp="2026-01-20 06:38:24 +0000 UTC" firstStartedPulling="2026-01-20 06:38:26.334348158 +0000 UTC m=+31.540824191" lastFinishedPulling="2026-01-20 
06:38:30.395809624 +0000 UTC m=+35.602285657" observedRunningTime="2026-01-20 06:38:31.468705893 +0000 UTC m=+36.675181936" watchObservedRunningTime="2026-01-20 06:38:32.456607589 +0000 UTC m=+37.663083622" Jan 20 06:38:32.548000 audit[3611]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:32.548000 audit[3611]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd82e07ca0 a2=0 a3=7ffd82e07c8c items=0 ppid=2978 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:32.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:32.556000 audit[3611]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3611 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:38:32.556000 audit[3611]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd82e07ca0 a2=0 a3=7ffd82e07c8c items=0 ppid=2978 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:32.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:38:33.420741 kubelet[2865]: E0120 06:38:33.420211 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:34.073012 kubelet[2865]: E0120 06:38:34.072401 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:36.070864 kubelet[2865]: E0120 06:38:36.070803 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:38.070657 kubelet[2865]: E0120 06:38:38.070492 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:38.775275 containerd[1645]: time="2026-01-20T06:38:38.774483042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:38.778856 containerd[1645]: time="2026-01-20T06:38:38.778817544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 20 06:38:38.781467 containerd[1645]: time="2026-01-20T06:38:38.781430815Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:38.789240 containerd[1645]: time="2026-01-20T06:38:38.788814195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:38:38.789240 containerd[1645]: time="2026-01-20T06:38:38.789779396Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.37689225s" Jan 20 06:38:38.789240 containerd[1645]: time="2026-01-20T06:38:38.789809281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 06:38:38.799339 containerd[1645]: time="2026-01-20T06:38:38.797786865Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 06:38:38.828240 containerd[1645]: time="2026-01-20T06:38:38.827594949Z" level=info msg="Container f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:38:38.872633 containerd[1645]: time="2026-01-20T06:38:38.870549458Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e\"" Jan 20 06:38:38.875909 containerd[1645]: time="2026-01-20T06:38:38.875671997Z" level=info msg="StartContainer for \"f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e\"" Jan 20 06:38:38.879717 containerd[1645]: time="2026-01-20T06:38:38.879604322Z" level=info msg="connecting to shim f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e" address="unix:///run/containerd/s/e86dd04ad0392cb7d8097839e6254d1667a726e89a4b03b62cba0101618aeaf3" protocol=ttrpc version=3 Jan 20 06:38:38.960576 systemd[1]: Started 
cri-containerd-f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e.scope - libcontainer container f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e. Jan 20 06:38:39.131000 audit: BPF prog-id=172 op=LOAD Jan 20 06:38:39.142482 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 06:38:39.142595 kernel: audit: type=1334 audit(1768891119.131:577): prog-id=172 op=LOAD Jan 20 06:38:39.131000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.197619 kernel: audit: type=1300 audit(1768891119.131:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.197759 kernel: audit: type=1327 audit(1768891119.131:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.239912 kernel: audit: type=1334 audit(1768891119.131:578): prog-id=173 op=LOAD Jan 20 06:38:39.131000 audit: BPF prog-id=173 op=LOAD Jan 20 06:38:39.131000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 
items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.296446 kernel: audit: type=1300 audit(1768891119.131:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.296772 kernel: audit: type=1327 audit(1768891119.131:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.132000 audit: BPF prog-id=173 op=UNLOAD Jan 20 06:38:39.356167 kernel: audit: type=1334 audit(1768891119.132:579): prog-id=173 op=UNLOAD Jan 20 06:38:39.132000 audit[3621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.387708 containerd[1645]: time="2026-01-20T06:38:39.384806071Z" level=info msg="StartContainer for \"f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e\" returns successfully" Jan 20 06:38:39.405417 kernel: audit: type=1300 audit(1768891119.132:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.132000 audit: BPF prog-id=172 op=UNLOAD Jan 20 06:38:39.462490 kernel: audit: type=1327 audit(1768891119.132:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.462632 kernel: audit: type=1334 audit(1768891119.132:580): prog-id=172 op=UNLOAD Jan 20 06:38:39.132000 audit[3621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.132000 audit: BPF prog-id=174 op=LOAD Jan 20 06:38:39.132000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3422 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:38:39.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631653731333236663633356433356632373561396366343832653665 Jan 20 06:38:39.511796 kubelet[2865]: E0120 06:38:39.511572 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:40.072852 kubelet[2865]: E0120 06:38:40.072327 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:40.517365 kubelet[2865]: E0120 06:38:40.516558 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:42.059740 systemd[1]: cri-containerd-f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e.scope: Deactivated successfully. Jan 20 06:38:42.060572 systemd[1]: cri-containerd-f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e.scope: Consumed 2.871s CPU time, 175.5M memory peak, 3.6M read from disk, 171.3M written to disk. 
Jan 20 06:38:42.063000 audit: BPF prog-id=174 op=UNLOAD Jan 20 06:38:42.071421 kubelet[2865]: E0120 06:38:42.070910 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:42.078330 containerd[1645]: time="2026-01-20T06:38:42.075635264Z" level=info msg="received container exit event container_id:\"f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e\" id:\"f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e\" pid:3634 exited_at:{seconds:1768891122 nanos:66881066}" Jan 20 06:38:42.176267 kubelet[2865]: I0120 06:38:42.175819 2865 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 06:38:42.216306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e71326f635d35f275a9cf482e6efa1cf84a93440f976146bbbd5bf3a76678e-rootfs.mount: Deactivated successfully. 
Jan 20 06:38:42.363499 kubelet[2865]: I0120 06:38:42.360338 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqgh\" (UniqueName: \"kubernetes.io/projected/1fb741a2-9573-41fd-9b50-18c9b4a4a79a-kube-api-access-qqqgh\") pod \"calico-kube-controllers-54fdff59b4-bvgmz\" (UID: \"1fb741a2-9573-41fd-9b50-18c9b4a4a79a\") " pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:38:42.363499 kubelet[2865]: I0120 06:38:42.360397 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb741a2-9573-41fd-9b50-18c9b4a4a79a-tigera-ca-bundle\") pod \"calico-kube-controllers-54fdff59b4-bvgmz\" (UID: \"1fb741a2-9573-41fd-9b50-18c9b4a4a79a\") " pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:38:42.388257 systemd[1]: Created slice kubepods-besteffort-pod1fb741a2_9573_41fd_9b50_18c9b4a4a79a.slice - libcontainer container kubepods-besteffort-pod1fb741a2_9573_41fd_9b50_18c9b4a4a79a.slice. Jan 20 06:38:42.409445 systemd[1]: Created slice kubepods-besteffort-podfdd5baaa_865a_43eb_a3a6_626c707ee467.slice - libcontainer container kubepods-besteffort-podfdd5baaa_865a_43eb_a3a6_626c707ee467.slice. Jan 20 06:38:42.438804 systemd[1]: Created slice kubepods-besteffort-pod1d1bd19b_efe8_47e1_8a7a_7256f246c0d1.slice - libcontainer container kubepods-besteffort-pod1d1bd19b_efe8_47e1_8a7a_7256f246c0d1.slice. Jan 20 06:38:42.456900 systemd[1]: Created slice kubepods-burstable-podfad6472f_e56c_45a1_b03c_51f4a6fda495.slice - libcontainer container kubepods-burstable-podfad6472f_e56c_45a1_b03c_51f4a6fda495.slice. Jan 20 06:38:42.476819 systemd[1]: Created slice kubepods-besteffort-pod6d1208c8_db25_4d18_a483_24a1de720368.slice - libcontainer container kubepods-besteffort-pod6d1208c8_db25_4d18_a483_24a1de720368.slice. 
Jan 20 06:38:42.494302 systemd[1]: Created slice kubepods-burstable-pod386fb045_c424_4905_ac49_b24568eb8b4b.slice - libcontainer container kubepods-burstable-pod386fb045_c424_4905_ac49_b24568eb8b4b.slice. Jan 20 06:38:42.529698 systemd[1]: Created slice kubepods-besteffort-pod1b97c41d_4ead_4c93_97f0_70532331e2e7.slice - libcontainer container kubepods-besteffort-pod1b97c41d_4ead_4c93_97f0_70532331e2e7.slice. Jan 20 06:38:42.548787 systemd[1]: Created slice kubepods-besteffort-pod8605c7f4_dda9_48f9_8faf_f356da42c13a.slice - libcontainer container kubepods-besteffort-pod8605c7f4_dda9_48f9_8faf_f356da42c13a.slice. Jan 20 06:38:42.564297 kubelet[2865]: I0120 06:38:42.563360 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b97c41d-4ead-4c93-97f0-70532331e2e7-calico-apiserver-certs\") pod \"calico-apiserver-5bb7ff584c-brrnn\" (UID: \"1b97c41d-4ead-4c93-97f0-70532331e2e7\") " pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:38:42.564297 kubelet[2865]: I0120 06:38:42.563420 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjsv\" (UniqueName: \"kubernetes.io/projected/6d1208c8-db25-4d18-a483-24a1de720368-kube-api-access-npjsv\") pod \"whisker-6c948d9fcd-285bx\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:38:42.564297 kubelet[2865]: I0120 06:38:42.563452 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8605c7f4-dda9-48f9-8faf-f356da42c13a-calico-apiserver-certs\") pod \"calico-apiserver-6f8db8dd5b-5v8sm\" (UID: \"8605c7f4-dda9-48f9-8faf-f356da42c13a\") " pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:38:42.564297 kubelet[2865]: I0120 06:38:42.563481 2865 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1d1bd19b-efe8-47e1-8a7a-7256f246c0d1-goldmane-key-pair\") pod \"goldmane-666569f655-grqpc\" (UID: \"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1\") " pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:42.564297 kubelet[2865]: I0120 06:38:42.563504 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/386fb045-c424-4905-ac49-b24568eb8b4b-config-volume\") pod \"coredns-668d6bf9bc-t5gmg\" (UID: \"386fb045-c424-4905-ac49-b24568eb8b4b\") " pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:38:42.564595 kubelet[2865]: I0120 06:38:42.563532 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdvt6\" (UniqueName: \"kubernetes.io/projected/fdd5baaa-865a-43eb-a3a6-626c707ee467-kube-api-access-hdvt6\") pod \"calico-apiserver-6f8db8dd5b-nqfrx\" (UID: \"fdd5baaa-865a-43eb-a3a6-626c707ee467\") " pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:38:42.564595 kubelet[2865]: I0120 06:38:42.563559 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29tx\" (UniqueName: \"kubernetes.io/projected/1d1bd19b-efe8-47e1-8a7a-7256f246c0d1-kube-api-access-g29tx\") pod \"goldmane-666569f655-grqpc\" (UID: \"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1\") " pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:42.564595 kubelet[2865]: I0120 06:38:42.563583 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjprs\" (UniqueName: \"kubernetes.io/projected/386fb045-c424-4905-ac49-b24568eb8b4b-kube-api-access-vjprs\") pod \"coredns-668d6bf9bc-t5gmg\" (UID: \"386fb045-c424-4905-ac49-b24568eb8b4b\") " pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 
06:38:42.564595 kubelet[2865]: I0120 06:38:42.563611 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tlcw\" (UniqueName: \"kubernetes.io/projected/8605c7f4-dda9-48f9-8faf-f356da42c13a-kube-api-access-4tlcw\") pod \"calico-apiserver-6f8db8dd5b-5v8sm\" (UID: \"8605c7f4-dda9-48f9-8faf-f356da42c13a\") " pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:38:42.564595 kubelet[2865]: I0120 06:38:42.563640 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzkl9\" (UniqueName: \"kubernetes.io/projected/fad6472f-e56c-45a1-b03c-51f4a6fda495-kube-api-access-bzkl9\") pod \"coredns-668d6bf9bc-728fw\" (UID: \"fad6472f-e56c-45a1-b03c-51f4a6fda495\") " pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:38:42.564820 kubelet[2865]: I0120 06:38:42.563670 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d1208c8-db25-4d18-a483-24a1de720368-whisker-ca-bundle\") pod \"whisker-6c948d9fcd-285bx\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:38:42.564820 kubelet[2865]: I0120 06:38:42.563693 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d1bd19b-efe8-47e1-8a7a-7256f246c0d1-goldmane-ca-bundle\") pod \"goldmane-666569f655-grqpc\" (UID: \"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1\") " pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:42.564820 kubelet[2865]: I0120 06:38:42.563714 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1bd19b-efe8-47e1-8a7a-7256f246c0d1-config\") pod \"goldmane-666569f655-grqpc\" (UID: \"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1\") 
" pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:42.564820 kubelet[2865]: I0120 06:38:42.563738 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzwd\" (UniqueName: \"kubernetes.io/projected/1b97c41d-4ead-4c93-97f0-70532331e2e7-kube-api-access-9wzwd\") pod \"calico-apiserver-5bb7ff584c-brrnn\" (UID: \"1b97c41d-4ead-4c93-97f0-70532331e2e7\") " pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:38:42.564820 kubelet[2865]: I0120 06:38:42.563769 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d1208c8-db25-4d18-a483-24a1de720368-whisker-backend-key-pair\") pod \"whisker-6c948d9fcd-285bx\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:38:42.564938 kubelet[2865]: I0120 06:38:42.563862 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fdd5baaa-865a-43eb-a3a6-626c707ee467-calico-apiserver-certs\") pod \"calico-apiserver-6f8db8dd5b-nqfrx\" (UID: \"fdd5baaa-865a-43eb-a3a6-626c707ee467\") " pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:38:42.564938 kubelet[2865]: I0120 06:38:42.563900 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fad6472f-e56c-45a1-b03c-51f4a6fda495-config-volume\") pod \"coredns-668d6bf9bc-728fw\" (UID: \"fad6472f-e56c-45a1-b03c-51f4a6fda495\") " pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:38:42.573353 kubelet[2865]: E0120 06:38:42.572821 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:42.577504 
containerd[1645]: time="2026-01-20T06:38:42.577388241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 06:38:42.716641 containerd[1645]: time="2026-01-20T06:38:42.714655364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:42.787708 containerd[1645]: time="2026-01-20T06:38:42.787489971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:42.820836 kubelet[2865]: E0120 06:38:42.819419 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:42.828702 containerd[1645]: time="2026-01-20T06:38:42.828659362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,}" Jan 20 06:38:42.858278 containerd[1645]: time="2026-01-20T06:38:42.852939682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:38:42.868771 containerd[1645]: time="2026-01-20T06:38:42.868562022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:38:43.029462 containerd[1645]: time="2026-01-20T06:38:43.027459647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:38:43.058304 containerd[1645]: time="2026-01-20T06:38:43.057631294Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:43.076286 kubelet[2865]: E0120 06:38:43.075622 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:43.081538 containerd[1645]: time="2026-01-20T06:38:43.080744281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,}" Jan 20 06:38:43.670848 containerd[1645]: time="2026-01-20T06:38:43.670789402Z" level=error msg="Failed to destroy network for sandbox \"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.672905 containerd[1645]: time="2026-01-20T06:38:43.672669966Z" level=error msg="Failed to destroy network for sandbox \"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.678287 systemd[1]: run-netns-cni\x2d84f372a1\x2debe3\x2d2e8d\x2d5676\x2dfa566ab1d600.mount: Deactivated successfully. Jan 20 06:38:43.685756 systemd[1]: run-netns-cni\x2d318064b6\x2d7a79\x2decce\x2d9b01\x2dabb2e46e53d7.mount: Deactivated successfully. 
Jan 20 06:38:43.708494 containerd[1645]: time="2026-01-20T06:38:43.708227058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.708494 containerd[1645]: time="2026-01-20T06:38:43.708306945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.709602 kubelet[2865]: E0120 06:38:43.708759 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.709602 kubelet[2865]: E0120 06:38:43.708845 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:38:43.709602 kubelet[2865]: E0120 06:38:43.708875 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:38:43.709750 kubelet[2865]: E0120 06:38:43.708929 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"808fbcf03cf4baa3eff4c1246201e00fbe1a01b4c04fee4df9619b96dea4bd8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:38:43.716800 kubelet[2865]: E0120 06:38:43.716340 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.716800 kubelet[2865]: E0120 06:38:43.716510 2865 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:43.716800 kubelet[2865]: E0120 06:38:43.716538 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:38:43.717360 kubelet[2865]: E0120 06:38:43.716586 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55116877d1bb1f33787030b0b74a5845b8edaf5d55849ae95de8a3edb822b236\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:38:43.808535 containerd[1645]: time="2026-01-20T06:38:43.808485295Z" level=error msg="Failed to destroy network for sandbox \"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.814778 containerd[1645]: time="2026-01-20T06:38:43.808742214Z" level=error msg="Failed to destroy network for sandbox \"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.818527 systemd[1]: run-netns-cni\x2dbbde8858\x2db401\x2df7ac\x2df346\x2d1eee36d55d47.mount: Deactivated successfully. Jan 20 06:38:43.876595 containerd[1645]: time="2026-01-20T06:38:43.875870317Z" level=error msg="Failed to destroy network for sandbox \"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.885955 containerd[1645]: time="2026-01-20T06:38:43.877811227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.888342 containerd[1645]: time="2026-01-20T06:38:43.886931230Z" level=error msg="Failed to destroy network for sandbox \"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 20 06:38:43.892497 kubelet[2865]: E0120 06:38:43.892454 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.895356 kubelet[2865]: E0120 06:38:43.895323 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:38:43.896388 kubelet[2865]: E0120 06:38:43.896359 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:38:43.896629 kubelet[2865]: E0120 06:38:43.896592 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e01482fcca1bd2f5c69656fa49d717669bd56ab7e299a783895e2215f4dc52a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c948d9fcd-285bx" podUID="6d1208c8-db25-4d18-a483-24a1de720368" Jan 20 06:38:43.897820 containerd[1645]: time="2026-01-20T06:38:43.897721463Z" level=error msg="Failed to destroy network for sandbox \"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.905732 containerd[1645]: time="2026-01-20T06:38:43.904891658Z" level=error msg="Failed to destroy network for sandbox \"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.909879 containerd[1645]: time="2026-01-20T06:38:43.908823453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.910916 kubelet[2865]: E0120 06:38:43.909466 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.910916 kubelet[2865]: E0120 06:38:43.909624 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:38:43.910916 kubelet[2865]: E0120 06:38:43.909654 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:38:43.911375 kubelet[2865]: E0120 06:38:43.909699 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c69fa317431a698d98a20519d36a63cc03492e6e6d8421815b4c76a65615a28b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:38:43.932596 containerd[1645]: 
time="2026-01-20T06:38:43.927729416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.932941 kubelet[2865]: E0120 06:38:43.928384 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.932941 kubelet[2865]: E0120 06:38:43.928437 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:38:43.932941 kubelet[2865]: E0120 06:38:43.928461 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:38:43.933376 kubelet[2865]: E0120 06:38:43.928513 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e69a11b52b0d87a3b3bc9d617da8b91def570e1407dae152e964a5ee3cb35197\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:38:43.960808 containerd[1645]: time="2026-01-20T06:38:43.960525605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.964847 kubelet[2865]: E0120 06:38:43.964477 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.964847 kubelet[2865]: E0120 06:38:43.964549 2865 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:38:43.964847 kubelet[2865]: E0120 06:38:43.964573 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:38:43.965261 kubelet[2865]: E0120 06:38:43.964629 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-728fw_kube-system(fad6472f-e56c-45a1-b03c-51f4a6fda495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-728fw_kube-system(fad6472f-e56c-45a1-b03c-51f4a6fda495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3d5f6c588cb106c41a51f2ebf3830ce1255f066f837cf99d4c3197dbd6eaf80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-728fw" podUID="fad6472f-e56c-45a1-b03c-51f4a6fda495" Jan 20 06:38:43.970569 containerd[1645]: time="2026-01-20T06:38:43.968524824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.971426 kubelet[2865]: E0120 06:38:43.969494 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.971426 kubelet[2865]: E0120 06:38:43.969536 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:38:43.971426 kubelet[2865]: E0120 06:38:43.969556 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:38:43.971565 kubelet[2865]: E0120 06:38:43.969617 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f8289f9b48fb05d407c325eec02fb2ace66551378f5d6b89b3256e15c53c500\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:38:43.973918 containerd[1645]: time="2026-01-20T06:38:43.973659785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.977519 kubelet[2865]: E0120 06:38:43.975664 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:43.977519 kubelet[2865]: E0120 06:38:43.976611 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:38:43.977519 kubelet[2865]: E0120 06:38:43.976647 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:38:43.977770 kubelet[2865]: E0120 06:38:43.976779 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfa2d213abcf531019b37073e38b455546d5d80d1f3e781ec96388a712358784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t5gmg" podUID="386fb045-c424-4905-ac49-b24568eb8b4b" Jan 20 06:38:44.102757 systemd[1]: Created slice kubepods-besteffort-pod67f738e9_ce9e_42e1_a454_66084ff2d3ad.slice - libcontainer container kubepods-besteffort-pod67f738e9_ce9e_42e1_a454_66084ff2d3ad.slice. 
Jan 20 06:38:44.117606 containerd[1645]: time="2026-01-20T06:38:44.117563024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:44.219605 systemd[1]: run-netns-cni\x2d53862b53\x2d9220\x2d663f\x2daf1e\x2d3c578c48e2ff.mount: Deactivated successfully. Jan 20 06:38:44.219940 systemd[1]: run-netns-cni\x2dfa68f117\x2da13a\x2dec2e\x2d14f7\x2dc7c43cebfbcb.mount: Deactivated successfully. Jan 20 06:38:44.220492 systemd[1]: run-netns-cni\x2d80f7f2c4\x2d8d79\x2da278\x2ddc17\x2d1487ecfec384.mount: Deactivated successfully. Jan 20 06:38:44.220562 systemd[1]: run-netns-cni\x2dfae35483\x2d383e\x2d5979\x2dd4d4\x2d33abf5d3bca0.mount: Deactivated successfully. Jan 20 06:38:44.220626 systemd[1]: run-netns-cni\x2d57a0835d\x2d3501\x2d3a7e\x2d49e8\x2d0f9fe1496b1d.mount: Deactivated successfully. Jan 20 06:38:44.536584 containerd[1645]: time="2026-01-20T06:38:44.536321620Z" level=error msg="Failed to destroy network for sandbox \"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:44.543559 systemd[1]: run-netns-cni\x2de558834b\x2d0b94\x2d5266\x2d4824\x2de5388f9e5fea.mount: Deactivated successfully. 
Jan 20 06:38:44.548320 containerd[1645]: time="2026-01-20T06:38:44.547952827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:44.550493 kubelet[2865]: E0120 06:38:44.549279 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:38:44.550493 kubelet[2865]: E0120 06:38:44.549364 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" Jan 20 06:38:44.550493 kubelet[2865]: E0120 06:38:44.549397 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" 
Jan 20 06:38:44.551307 kubelet[2865]: E0120 06:38:44.549448 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b02c43f31c259a60c1b38d6bfeff92e1c022c493b62e586cd781e2d00532cb89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:38:55.132539 containerd[1645]: time="2026-01-20T06:38:55.131463730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:55.200552 containerd[1645]: time="2026-01-20T06:38:55.198721957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:56.155918 kubelet[2865]: E0120 06:38:56.140942 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:56.231601 containerd[1645]: time="2026-01-20T06:38:56.227392704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,}" Jan 20 06:38:56.294466 containerd[1645]: time="2026-01-20T06:38:56.293603450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,}" Jan 20 
06:38:57.122489 kubelet[2865]: E0120 06:38:57.121554 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:38:57.295905 containerd[1645]: time="2026-01-20T06:38:57.295678495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:38:57.664888 containerd[1645]: time="2026-01-20T06:38:57.624429235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:04.463001 kubelet[2865]: E0120 06:39:04.462372 2865 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.352s" Jan 20 06:39:04.669705 containerd[1645]: time="2026-01-20T06:39:04.669481941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:04.676538 containerd[1645]: time="2026-01-20T06:39:04.676500417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:04.677480 containerd[1645]: time="2026-01-20T06:39:04.676975367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:05.672800 containerd[1645]: time="2026-01-20T06:39:05.672521940Z" level=error msg="Failed to destroy network for sandbox \"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.682765 systemd[1]: run-netns-cni\x2d8dcbc69c\x2d2ad4\x2d7496\x2d765c\x2d92a7908ac88d.mount: Deactivated successfully. Jan 20 06:39:05.726342 containerd[1645]: time="2026-01-20T06:39:05.725757458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.728516 kubelet[2865]: E0120 06:39:05.726603 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.728516 kubelet[2865]: E0120 06:39:05.726679 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" Jan 20 06:39:05.728516 kubelet[2865]: E0120 06:39:05.726714 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" Jan 20 06:39:05.729807 kubelet[2865]: E0120 06:39:05.727817 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17247eb85c5d10a3cc7c1789d1474e93de0a1de0ef759c31cb0b93eca10bdd2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:05.946556 containerd[1645]: time="2026-01-20T06:39:05.932924524Z" level=error msg="Failed to destroy network for sandbox \"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.943756 systemd[1]: run-netns-cni\x2d59bc7953\x2db578\x2d643d\x2dd0e7\x2d8f56bfb5064b.mount: Deactivated successfully. 
Jan 20 06:39:05.959332 containerd[1645]: time="2026-01-20T06:39:05.953425312Z" level=error msg="Failed to destroy network for sandbox \"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.961805 systemd[1]: run-netns-cni\x2d19d3e735\x2d4b7d\x2d29c6\x2df864\x2df7501406260a.mount: Deactivated successfully. Jan 20 06:39:05.979444 containerd[1645]: time="2026-01-20T06:39:05.978855778Z" level=error msg="Failed to destroy network for sandbox \"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:05.991690 systemd[1]: run-netns-cni\x2d4ed38d77\x2dab2c\x2d6ac3\x2de9ce\x2ddbcea57dbfa4.mount: Deactivated successfully. Jan 20 06:39:06.025375 containerd[1645]: time="2026-01-20T06:39:06.010515342Z" level=error msg="Failed to destroy network for sandbox \"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.028624 systemd[1]: run-netns-cni\x2d2d32b458\x2d7b3b\x2dad32\x2d5f33\x2d294d28698e2b.mount: Deactivated successfully. 
Jan 20 06:39:06.030567 containerd[1645]: time="2026-01-20T06:39:06.030401893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.042234 kubelet[2865]: E0120 06:39:06.034016 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.042234 kubelet[2865]: E0120 06:39:06.034315 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:39:06.042234 kubelet[2865]: E0120 06:39:06.034342 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:39:06.042625 containerd[1645]: time="2026-01-20T06:39:06.040791491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.042914 kubelet[2865]: E0120 06:39:06.040770 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b2e3a2f2b29c48b2281b5bf292e97bddddf1d47df614c4e0c654ade792458fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c948d9fcd-285bx" podUID="6d1208c8-db25-4d18-a483-24a1de720368" Jan 20 06:39:06.046978 kubelet[2865]: E0120 06:39:06.041981 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.046978 kubelet[2865]: E0120 06:39:06.044640 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:39:06.046978 kubelet[2865]: E0120 06:39:06.044674 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-728fw" Jan 20 06:39:06.050487 kubelet[2865]: E0120 06:39:06.044740 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-728fw_kube-system(fad6472f-e56c-45a1-b03c-51f4a6fda495)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-728fw_kube-system(fad6472f-e56c-45a1-b03c-51f4a6fda495)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6731bdcd1a606ab3f6226eed4ea094aac00022b37faebb53f7d222d03e784d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-728fw" podUID="fad6472f-e56c-45a1-b03c-51f4a6fda495" Jan 20 06:39:06.064696 containerd[1645]: time="2026-01-20T06:39:06.063472014Z" level=error msg="Failed to destroy network for sandbox \"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 20 06:39:06.094278 containerd[1645]: time="2026-01-20T06:39:06.090607236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.094278 containerd[1645]: time="2026-01-20T06:39:06.093277119Z" level=error msg="Failed to destroy network for sandbox \"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.106016 containerd[1645]: time="2026-01-20T06:39:06.105720757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.126272 kubelet[2865]: E0120 06:39:06.114467 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.126272 kubelet[2865]: E0120 06:39:06.114540 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:39:06.126272 kubelet[2865]: E0120 06:39:06.114569 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:39:06.126865 containerd[1645]: time="2026-01-20T06:39:06.116551709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.126865 containerd[1645]: time="2026-01-20T06:39:06.125370992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.133983 kubelet[2865]: E0120 06:39:06.114620 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bd9848a6e5146a2d2b9455e05e7b1d6d42f5471b775c91be5e31cf6175c69a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:06.133983 kubelet[2865]: E0120 06:39:06.114669 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.133983 kubelet[2865]: E0120 06:39:06.114711 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:39:06.150003 kubelet[2865]: E0120 06:39:06.114734 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" Jan 20 06:39:06.150003 kubelet[2865]: E0120 06:39:06.117872 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0387fd7907e636f06941fc69f6d446e9e750f820f4648f5a9613d5053d8d55b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:39:06.150003 kubelet[2865]: E0120 06:39:06.119999 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.150720 kubelet[2865]: E0120 06:39:06.120481 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:39:06.150720 kubelet[2865]: E0120 06:39:06.120508 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:39:06.150720 kubelet[2865]: E0120 06:39:06.120546 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f92535a4b0a794ea9da1b47fe88e3ee171adece2937a60f1109623d5ae9e2d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:06.151589 kubelet[2865]: E0120 06:39:06.134476 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.151589 kubelet[2865]: E0120 06:39:06.134545 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:39:06.151589 kubelet[2865]: E0120 06:39:06.134572 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-grqpc" Jan 20 06:39:06.151894 kubelet[2865]: E0120 06:39:06.134620 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f24e98069a4055025f87b75b06fc18bf9b43872810bef9d43bb43f7f4bd1ec2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:06.181486 containerd[1645]: 
time="2026-01-20T06:39:06.178595412Z" level=error msg="Failed to destroy network for sandbox \"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.203688 containerd[1645]: time="2026-01-20T06:39:06.203337824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.218646 kubelet[2865]: E0120 06:39:06.215811 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.219248 kubelet[2865]: E0120 06:39:06.218937 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:39:06.219248 kubelet[2865]: E0120 06:39:06.218969 2865 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:39:06.219402 kubelet[2865]: E0120 06:39:06.219368 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edb3db52c59684a38bed8be13bb52d7027f90a050db0f32058bf36303d10f1db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:39:06.234446 containerd[1645]: time="2026-01-20T06:39:06.233956456Z" level=error msg="Failed to destroy network for sandbox \"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.294766 containerd[1645]: time="2026-01-20T06:39:06.292611467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.296568 kubelet[2865]: E0120 06:39:06.295679 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:06.296568 kubelet[2865]: E0120 06:39:06.296002 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:39:06.296568 kubelet[2865]: E0120 06:39:06.296280 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:39:06.296890 kubelet[2865]: E0120 06:39:06.296461 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9c44132a8ef317a51af7328e3d7c57a5a983fe6419657d69e85714b0919b0cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t5gmg" podUID="386fb045-c424-4905-ac49-b24568eb8b4b" Jan 20 06:39:06.681589 systemd[1]: run-netns-cni\x2d7cc476ba\x2d7497\x2db8ce\x2d30e3\x2d9fbfafb62720.mount: Deactivated successfully. Jan 20 06:39:06.681938 systemd[1]: run-netns-cni\x2d7f8eadcf\x2dea35\x2d2525\x2d93c7\x2d35ea3b2c0721.mount: Deactivated successfully. Jan 20 06:39:06.682589 systemd[1]: run-netns-cni\x2d8dd96b5a\x2d65d9\x2d585b\x2d1aa4\x2daf3c348351bc.mount: Deactivated successfully. Jan 20 06:39:06.682687 systemd[1]: run-netns-cni\x2d79d81300\x2d00c2\x2d2d95\x2df1b7\x2dc62cb88ab292.mount: Deactivated successfully. 
Jan 20 06:39:10.074636 kubelet[2865]: E0120 06:39:10.074443 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:13.074565 kubelet[2865]: E0120 06:39:13.074352 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:17.072452 kubelet[2865]: E0120 06:39:17.071493 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:17.075740 containerd[1645]: time="2026-01-20T06:39:17.073915562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:17.081457 containerd[1645]: time="2026-01-20T06:39:17.080529166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:17.082548 containerd[1645]: time="2026-01-20T06:39:17.081979519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:17.083720 containerd[1645]: time="2026-01-20T06:39:17.083681904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:17.420880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1221203755.mount: Deactivated successfully. 
Jan 20 06:39:17.675566 containerd[1645]: time="2026-01-20T06:39:17.671895051Z" level=error msg="Failed to destroy network for sandbox \"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.699776 containerd[1645]: time="2026-01-20T06:39:17.699460513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c948d9fcd-285bx,Uid:6d1208c8-db25-4d18-a483-24a1de720368,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.701706 kubelet[2865]: E0120 06:39:17.701579 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.701792 kubelet[2865]: E0120 06:39:17.701733 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:39:17.701792 kubelet[2865]: E0120 06:39:17.701756 2865 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c948d9fcd-285bx" Jan 20 06:39:17.705664 kubelet[2865]: E0120 06:39:17.705451 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c948d9fcd-285bx_calico-system(6d1208c8-db25-4d18-a483-24a1de720368)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50e056e6ca39bf828346a023abdfd83135ffa8ec932e30e6623ea8202e23a39d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c948d9fcd-285bx" podUID="6d1208c8-db25-4d18-a483-24a1de720368" Jan 20 06:39:17.706461 containerd[1645]: time="2026-01-20T06:39:17.705526818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.710594 containerd[1645]: time="2026-01-20T06:39:17.709997021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 06:39:17.714878 containerd[1645]: time="2026-01-20T06:39:17.714740522Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.737543 containerd[1645]: time="2026-01-20T06:39:17.736610897Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:39:17.739601 containerd[1645]: time="2026-01-20T06:39:17.738950557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 35.161522843s" Jan 20 06:39:17.743941 containerd[1645]: time="2026-01-20T06:39:17.743402477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 06:39:17.806468 containerd[1645]: time="2026-01-20T06:39:17.805927918Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 06:39:17.913789 containerd[1645]: time="2026-01-20T06:39:17.913728180Z" level=error msg="Failed to destroy network for sandbox \"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.922962 containerd[1645]: time="2026-01-20T06:39:17.921933607Z" level=info msg="Container 7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:17.930361 containerd[1645]: time="2026-01-20T06:39:17.929546645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.930780 kubelet[2865]: E0120 06:39:17.930702 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.930846 kubelet[2865]: E0120 06:39:17.930777 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:39:17.930846 kubelet[2865]: E0120 06:39:17.930805 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t5gmg" Jan 20 06:39:17.930927 kubelet[2865]: E0120 06:39:17.930851 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-t5gmg_kube-system(386fb045-c424-4905-ac49-b24568eb8b4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00a66a0e33d3e96283b84b952faf9a2a5ce3e20c8ac8d0f8459a8e5294d56cb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t5gmg" podUID="386fb045-c424-4905-ac49-b24568eb8b4b" Jan 20 06:39:17.950467 containerd[1645]: time="2026-01-20T06:39:17.949892079Z" level=error msg="Failed to destroy network for sandbox \"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.959688 containerd[1645]: time="2026-01-20T06:39:17.959551083Z" level=error msg="Failed to destroy network for sandbox \"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:17.994829 containerd[1645]: time="2026-01-20T06:39:17.993722540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:18.002924 kubelet[2865]: E0120 06:39:18.002547 2865 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:18.009853 kubelet[2865]: E0120 06:39:18.003010 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:39:18.009853 kubelet[2865]: E0120 06:39:18.007469 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" Jan 20 06:39:18.009853 kubelet[2865]: E0120 06:39:18.007687 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed4d070ffe2fc423b61d1079d8ce7052ed6273648c4288efc6f37c5d30e45db7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:39:18.067642 containerd[1645]: time="2026-01-20T06:39:18.066991956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:18.069829 kubelet[2865]: E0120 06:39:18.069434 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:18.069829 kubelet[2865]: E0120 06:39:18.069592 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" Jan 20 06:39:18.069829 kubelet[2865]: E0120 06:39:18.069614 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kp869" Jan 20 06:39:18.070713 kubelet[2865]: E0120 06:39:18.069653 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"195a77f287c8ae0eeaaa871036f1d2df15fb0aa3d0b1c01139d7de072d76e078\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:18.086534 containerd[1645]: time="2026-01-20T06:39:18.085795725Z" level=info msg="CreateContainer within sandbox \"f01573ff7c0655051a7aacc50afdf777b731c4eac69bef67ec8309602cf7e9e3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a\"" Jan 20 06:39:18.088457 containerd[1645]: time="2026-01-20T06:39:18.087536531Z" level=info msg="StartContainer for \"7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a\"" Jan 20 06:39:18.093903 containerd[1645]: time="2026-01-20T06:39:18.093649355Z" level=info msg="connecting to shim 7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a" address="unix:///run/containerd/s/e86dd04ad0392cb7d8097839e6254d1667a726e89a4b03b62cba0101618aeaf3" protocol=ttrpc version=3 Jan 20 06:39:18.147990 systemd[1]: run-netns-cni\x2df09693c5\x2d89b0\x2d7f61\x2d3cd7\x2de8bc9f170cd5.mount: Deactivated successfully. 
Jan 20 06:39:18.149536 systemd[1]: run-netns-cni\x2d363296c2\x2dcacd\x2d1be4\x2da9bb\x2da7627abab2eb.mount: Deactivated successfully. Jan 20 06:39:18.149636 systemd[1]: run-netns-cni\x2d356f415e\x2d4730\x2db45e\x2d498a\x2d6947e1005c83.mount: Deactivated successfully. Jan 20 06:39:18.149728 systemd[1]: run-netns-cni\x2d15bd12f5\x2de5f6\x2da3f8\x2d4916\x2d5b17f5924b4e.mount: Deactivated successfully. Jan 20 06:39:18.320543 systemd[1]: Started cri-containerd-7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a.scope - libcontainer container 7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a. Jan 20 06:39:18.607961 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 06:39:18.609518 kernel: audit: type=1334 audit(1768891158.595:583): prog-id=175 op=LOAD Jan 20 06:39:18.595000 audit: BPF prog-id=175 op=LOAD Jan 20 06:39:18.595000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.659881 kernel: audit: type=1300 audit(1768891158.595:583): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.713715 kernel: audit: type=1327 audit(1768891158.595:583): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.713852 kernel: audit: type=1334 audit(1768891158.608:584): prog-id=176 op=LOAD Jan 20 06:39:18.608000 audit: BPF prog-id=176 op=LOAD Jan 20 06:39:18.608000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.800883 kernel: audit: type=1300 audit(1768891158.608:584): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.801518 kernel: audit: type=1327 audit(1768891158.608:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.608000 audit: BPF prog-id=176 op=UNLOAD Jan 20 06:39:18.608000 audit[4390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.865800 kernel: audit: type=1334 audit(1768891158.608:585): prog-id=176 op=UNLOAD Jan 20 06:39:18.866484 kernel: audit: type=1300 audit(1768891158.608:585): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.866533 kernel: audit: type=1327 audit(1768891158.608:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.608000 audit: BPF prog-id=175 op=UNLOAD Jan 20 06:39:18.932404 kernel: audit: type=1334 audit(1768891158.608:586): prog-id=175 op=UNLOAD Jan 20 06:39:18.608000 audit[4390]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.608000 
audit: BPF prog-id=177 op=LOAD Jan 20 06:39:18.608000 audit[4390]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=3422 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:18.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636664326238623136663239316231636462643163366466393961 Jan 20 06:39:18.989371 containerd[1645]: time="2026-01-20T06:39:18.988646743Z" level=info msg="StartContainer for \"7acfd2b8b16f291b1cdbd1c6df99a1a1feb86128a121d10859f6bc64890e2e4a\" returns successfully" Jan 20 06:39:19.688463 kubelet[2865]: E0120 06:39:19.687951 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:19.798780 kubelet[2865]: I0120 06:39:19.797517 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68d9v" podStartSLOduration=3.592228598 podStartE2EDuration="54.792924246s" podCreationTimestamp="2026-01-20 06:38:25 +0000 UTC" firstStartedPulling="2026-01-20 06:38:26.555287898 +0000 UTC m=+31.761763941" lastFinishedPulling="2026-01-20 06:39:17.755983557 +0000 UTC m=+82.962459589" observedRunningTime="2026-01-20 06:39:19.780975112 +0000 UTC m=+84.987451145" watchObservedRunningTime="2026-01-20 06:39:19.792924246 +0000 UTC m=+84.999400279" Jan 20 06:39:19.815331 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 06:39:19.815548 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 06:39:20.078674 containerd[1645]: time="2026-01-20T06:39:20.078627566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:20.086849 containerd[1645]: time="2026-01-20T06:39:20.079853999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:20.704392 kubelet[2865]: E0120 06:39:20.703777 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:20.794841 containerd[1645]: time="2026-01-20T06:39:20.794489147Z" level=error msg="Failed to destroy network for sandbox \"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.807664 systemd[1]: run-netns-cni\x2d1bf7c090\x2db425\x2d8a3f\x2d87e3\x2d03fd946323ae.mount: Deactivated successfully. 
Jan 20 06:39:20.813901 containerd[1645]: time="2026-01-20T06:39:20.813378467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.819833 kubelet[2865]: E0120 06:39:20.817423 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.819833 kubelet[2865]: E0120 06:39:20.817482 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:39:20.819833 kubelet[2865]: E0120 06:39:20.817504 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" Jan 20 06:39:20.820021 kubelet[2865]: E0120 06:39:20.817540 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee7387f05c52503d748f105c031f7004bf4cfee031d6cd6eea6211a198446f7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:20.868780 containerd[1645]: time="2026-01-20T06:39:20.867784389Z" level=error msg="Failed to destroy network for sandbox \"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.878960 systemd[1]: run-netns-cni\x2d32d58a50\x2d290f\x2dfa98\x2d839a\x2d4546a2288a24.mount: Deactivated successfully. 
Jan 20 06:39:20.903409 containerd[1645]: time="2026-01-20T06:39:20.898887687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.903770 kubelet[2865]: E0120 06:39:20.901671 2865 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:39:20.903770 kubelet[2865]: E0120 06:39:20.901747 2865 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:39:20.903770 kubelet[2865]: E0120 06:39:20.901777 2865 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" Jan 20 06:39:20.904430 kubelet[2865]: E0120 06:39:20.901838 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ced36a7c0cfef26374e308a80ce8f93934029e73b73faa2bb953dfbf7a7c23ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:21.096742 kubelet[2865]: E0120 06:39:21.094484 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:21.102685 containerd[1645]: time="2026-01-20T06:39:21.102644346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:21.110864 containerd[1645]: time="2026-01-20T06:39:21.108728463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:21.112638 containerd[1645]: time="2026-01-20T06:39:21.108755073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:21.330582 kubelet[2865]: I0120 06:39:21.330021 2865 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d1208c8-db25-4d18-a483-24a1de720368-whisker-backend-key-pair\") pod \"6d1208c8-db25-4d18-a483-24a1de720368\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " Jan 20 06:39:21.330582 kubelet[2865]: I0120 06:39:21.330556 2865 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d1208c8-db25-4d18-a483-24a1de720368-whisker-ca-bundle\") pod \"6d1208c8-db25-4d18-a483-24a1de720368\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " Jan 20 06:39:21.330789 kubelet[2865]: I0120 06:39:21.330606 2865 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjsv\" (UniqueName: \"kubernetes.io/projected/6d1208c8-db25-4d18-a483-24a1de720368-kube-api-access-npjsv\") pod \"6d1208c8-db25-4d18-a483-24a1de720368\" (UID: \"6d1208c8-db25-4d18-a483-24a1de720368\") " Jan 20 06:39:21.356896 kubelet[2865]: I0120 06:39:21.351909 2865 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d1208c8-db25-4d18-a483-24a1de720368-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6d1208c8-db25-4d18-a483-24a1de720368" (UID: "6d1208c8-db25-4d18-a483-24a1de720368"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 06:39:21.400521 systemd[1]: var-lib-kubelet-pods-6d1208c8\x2ddb25\x2d4d18\x2da483\x2d24a1de720368-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 06:39:21.405504 kubelet[2865]: I0120 06:39:21.404821 2865 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1208c8-db25-4d18-a483-24a1de720368-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6d1208c8-db25-4d18-a483-24a1de720368" (UID: "6d1208c8-db25-4d18-a483-24a1de720368"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 06:39:21.419623 systemd[1]: var-lib-kubelet-pods-6d1208c8\x2ddb25\x2d4d18\x2da483\x2d24a1de720368-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnpjsv.mount: Deactivated successfully. Jan 20 06:39:21.424835 kubelet[2865]: I0120 06:39:21.423303 2865 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1208c8-db25-4d18-a483-24a1de720368-kube-api-access-npjsv" (OuterVolumeSpecName: "kube-api-access-npjsv") pod "6d1208c8-db25-4d18-a483-24a1de720368" (UID: "6d1208c8-db25-4d18-a483-24a1de720368"). InnerVolumeSpecName "kube-api-access-npjsv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 06:39:21.435609 kubelet[2865]: I0120 06:39:21.434693 2865 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d1208c8-db25-4d18-a483-24a1de720368-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 06:39:21.435609 kubelet[2865]: I0120 06:39:21.434851 2865 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d1208c8-db25-4d18-a483-24a1de720368-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 06:39:21.435609 kubelet[2865]: I0120 06:39:21.434868 2865 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-npjsv\" (UniqueName: \"kubernetes.io/projected/6d1208c8-db25-4d18-a483-24a1de720368-kube-api-access-npjsv\") on node \"localhost\" DevicePath \"\"" Jan 20 06:39:21.755583 systemd[1]: Removed slice kubepods-besteffort-pod6d1208c8_db25_4d18_a483_24a1de720368.slice - libcontainer container kubepods-besteffort-pod6d1208c8_db25_4d18_a483_24a1de720368.slice. 
Jan 20 06:39:22.220971 systemd[1]: Created slice kubepods-besteffort-pod85a3d7fc_92d2_477e_a3c6_cf998fc60fae.slice - libcontainer container kubepods-besteffort-pod85a3d7fc_92d2_477e_a3c6_cf998fc60fae.slice. Jan 20 06:39:22.374745 kubelet[2865]: I0120 06:39:22.374674 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85a3d7fc-92d2-477e-a3c6-cf998fc60fae-whisker-ca-bundle\") pod \"whisker-7688649cc6-vz554\" (UID: \"85a3d7fc-92d2-477e-a3c6-cf998fc60fae\") " pod="calico-system/whisker-7688649cc6-vz554" Jan 20 06:39:22.378779 kubelet[2865]: I0120 06:39:22.378639 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx8m5\" (UniqueName: \"kubernetes.io/projected/85a3d7fc-92d2-477e-a3c6-cf998fc60fae-kube-api-access-cx8m5\") pod \"whisker-7688649cc6-vz554\" (UID: \"85a3d7fc-92d2-477e-a3c6-cf998fc60fae\") " pod="calico-system/whisker-7688649cc6-vz554" Jan 20 06:39:22.378779 kubelet[2865]: I0120 06:39:22.378688 2865 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85a3d7fc-92d2-477e-a3c6-cf998fc60fae-whisker-backend-key-pair\") pod \"whisker-7688649cc6-vz554\" (UID: \"85a3d7fc-92d2-477e-a3c6-cf998fc60fae\") " pod="calico-system/whisker-7688649cc6-vz554" Jan 20 06:39:22.858453 containerd[1645]: time="2026-01-20T06:39:22.857924466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7688649cc6-vz554,Uid:85a3d7fc-92d2-477e-a3c6-cf998fc60fae,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:23.106768 kubelet[2865]: I0120 06:39:23.106500 2865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1208c8-db25-4d18-a483-24a1de720368" path="/var/lib/kubelet/pods/6d1208c8-db25-4d18-a483-24a1de720368/volumes" Jan 20 06:39:23.340780 systemd-networkd[1524]: cali19f27ba3f1e: 
Link UP Jan 20 06:39:23.344945 systemd-networkd[1524]: cali19f27ba3f1e: Gained carrier Jan 20 06:39:23.474826 containerd[1645]: 2026-01-20 06:39:21.731 [INFO][4552] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:39:23.474826 containerd[1645]: 2026-01-20 06:39:22.022 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0 calico-apiserver-6f8db8dd5b- calico-apiserver fdd5baaa-865a-43eb-a3a6-626c707ee467 941 0 2026-01-20 06:38:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8db8dd5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f8db8dd5b-nqfrx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19f27ba3f1e [] [] }} ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-" Jan 20 06:39:23.474826 containerd[1645]: 2026-01-20 06:39:22.022 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.474826 containerd[1645]: 2026-01-20 06:39:22.849 [INFO][4600] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" HandleID="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.475639 
containerd[1645]: 2026-01-20 06:39:22.852 [INFO][4600] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" HandleID="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000193e80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f8db8dd5b-nqfrx", "timestamp":"2026-01-20 06:39:22.849454882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:22.852 [INFO][4600] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:22.852 [INFO][4600] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:22.856 [INFO][4600] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:22.935 [INFO][4600] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" host="localhost" Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:23.002 [INFO][4600] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:23.072 [INFO][4600] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:23.126 [INFO][4600] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:23.134 [INFO][4600] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:23.475639 containerd[1645]: 2026-01-20 06:39:23.134 [INFO][4600] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" host="localhost" Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.144 [INFO][4600] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.172 [INFO][4600] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" host="localhost" Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.210 [INFO][4600] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" host="localhost" Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.211 [INFO][4600] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" host="localhost" Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.212 [INFO][4600] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:23.476724 containerd[1645]: 2026-01-20 06:39:23.212 [INFO][4600] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" HandleID="k8s-pod-network.d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.481002 containerd[1645]: 2026-01-20 06:39:23.240 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0", GenerateName:"calico-apiserver-6f8db8dd5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdd5baaa-865a-43eb-a3a6-626c707ee467", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8db8dd5b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f8db8dd5b-nqfrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f27ba3f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:23.481690 containerd[1645]: 2026-01-20 06:39:23.243 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.481690 containerd[1645]: 2026-01-20 06:39:23.243 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19f27ba3f1e ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.481690 containerd[1645]: 2026-01-20 06:39:23.355 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.481801 containerd[1645]: 2026-01-20 06:39:23.357 [INFO][4552] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0", GenerateName:"calico-apiserver-6f8db8dd5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"fdd5baaa-865a-43eb-a3a6-626c707ee467", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8db8dd5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e", Pod:"calico-apiserver-6f8db8dd5b-nqfrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f27ba3f1e", MAC:"fa:68:fe:ea:47:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:23.482436 containerd[1645]: 2026-01-20 06:39:23.447 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-nqfrx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--nqfrx-eth0" Jan 20 06:39:23.696872 systemd-networkd[1524]: caliba427bbd6cf: Link UP Jan 20 06:39:23.715857 systemd-networkd[1524]: caliba427bbd6cf: Gained carrier Jan 20 06:39:23.826359 containerd[1645]: 2026-01-20 06:39:21.756 [INFO][4551] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:39:23.826359 containerd[1645]: 2026-01-20 06:39:22.028 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--grqpc-eth0 goldmane-666569f655- calico-system 1d1bd19b-efe8-47e1-8a7a-7256f246c0d1 943 0 2026-01-20 06:38:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-grqpc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliba427bbd6cf [] [] }} ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-" Jan 20 06:39:23.826359 containerd[1645]: 2026-01-20 06:39:22.030 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.826359 containerd[1645]: 2026-01-20 06:39:22.848 [INFO][4597] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" 
HandleID="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Workload="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:22.851 [INFO][4597] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" HandleID="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Workload="localhost-k8s-goldmane--666569f655--grqpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-grqpc", "timestamp":"2026-01-20 06:39:22.848505077 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:22.853 [INFO][4597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.216 [INFO][4597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.216 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.266 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" host="localhost" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.346 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.385 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.410 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.454 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:23.826900 containerd[1645]: 2026-01-20 06:39:23.455 [INFO][4597] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" host="localhost" Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.503 [INFO][4597] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14 Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.541 [INFO][4597] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" host="localhost" Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.602 [INFO][4597] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" host="localhost" Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.603 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" host="localhost" Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.603 [INFO][4597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:23.828019 containerd[1645]: 2026-01-20 06:39:23.603 [INFO][4597] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" HandleID="k8s-pod-network.35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Workload="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.843834 containerd[1645]: 2026-01-20 06:39:23.644 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--grqpc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-grqpc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba427bbd6cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:23.843834 containerd[1645]: 2026-01-20 06:39:23.647 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.850811 containerd[1645]: 2026-01-20 06:39:23.648 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba427bbd6cf ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.850811 containerd[1645]: 2026-01-20 06:39:23.719 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.853475 containerd[1645]: 2026-01-20 06:39:23.728 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--grqpc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1d1bd19b-efe8-47e1-8a7a-7256f246c0d1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14", Pod:"goldmane-666569f655-grqpc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba427bbd6cf", MAC:"e2:fe:bb:fe:25:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:23.853788 containerd[1645]: 2026-01-20 06:39:23.793 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" Namespace="calico-system" Pod="goldmane-666569f655-grqpc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--grqpc-eth0" Jan 20 06:39:23.999584 containerd[1645]: time="2026-01-20T06:39:23.991874853Z" level=info msg="connecting to shim 
d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e" address="unix:///run/containerd/s/17c20ba379adce91e3a9e0429c0ccb2640aaa14b4b4b2611b516f314a4245078" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:24.147996 containerd[1645]: time="2026-01-20T06:39:24.145558551Z" level=info msg="connecting to shim 35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14" address="unix:///run/containerd/s/d8f5f86de343670f77c261b51d480c783a6706bb12391aa3781da2f5b63ed779" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:24.177920 systemd-networkd[1524]: calida1a4551219: Link UP Jan 20 06:39:24.182637 systemd-networkd[1524]: calida1a4551219: Gained carrier Jan 20 06:39:24.339773 containerd[1645]: 2026-01-20 06:39:21.705 [INFO][4549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:39:24.339773 containerd[1645]: 2026-01-20 06:39:22.022 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--728fw-eth0 coredns-668d6bf9bc- kube-system fad6472f-e56c-45a1-b03c-51f4a6fda495 945 0 2026-01-20 06:37:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-728fw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida1a4551219 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-" Jan 20 06:39:24.339773 containerd[1645]: 2026-01-20 06:39:22.022 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.339773 containerd[1645]: 2026-01-20 06:39:22.848 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" HandleID="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Workload="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:22.854 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" HandleID="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Workload="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e3f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-728fw", "timestamp":"2026-01-20 06:39:22.848611289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:22.855 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.604 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.605 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.665 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" host="localhost" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.763 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.885 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.901 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.917 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:24.341849 containerd[1645]: 2026-01-20 06:39:23.917 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" host="localhost" Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:23.928 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606 Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:23.989 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" host="localhost" Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:24.086 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" host="localhost" Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:24.086 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" host="localhost" Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:24.087 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:24.342955 containerd[1645]: 2026-01-20 06:39:24.087 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" HandleID="k8s-pod-network.4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Workload="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.352775 containerd[1645]: 2026-01-20 06:39:24.162 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--728fw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad6472f-e56c-45a1-b03c-51f4a6fda495", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-728fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1a4551219", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:24.353501 containerd[1645]: 2026-01-20 06:39:24.162 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.353501 containerd[1645]: 2026-01-20 06:39:24.162 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida1a4551219 ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.353501 containerd[1645]: 2026-01-20 06:39:24.182 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.353735 containerd[1645]: 2026-01-20 06:39:24.188 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--728fw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fad6472f-e56c-45a1-b03c-51f4a6fda495", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606", Pod:"coredns-668d6bf9bc-728fw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1a4551219", MAC:"72:60:ad:4b:51:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:24.353735 containerd[1645]: 2026-01-20 06:39:24.269 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" Namespace="kube-system" Pod="coredns-668d6bf9bc-728fw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--728fw-eth0" Jan 20 06:39:24.366953 systemd[1]: Started cri-containerd-d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e.scope - libcontainer container d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e. Jan 20 06:39:24.524703 systemd[1]: Started cri-containerd-35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14.scope - libcontainer container 35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14. 
Jan 20 06:39:24.540848 systemd-networkd[1524]: cali19f27ba3f1e: Gained IPv6LL Jan 20 06:39:24.565629 containerd[1645]: time="2026-01-20T06:39:24.565013953Z" level=info msg="connecting to shim 4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606" address="unix:///run/containerd/s/76016a52c699d261025d3165cdcc3701c255a6413f747c53776cdee557ce3820" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:24.628523 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 06:39:24.628641 kernel: audit: type=1334 audit(1768891164.602:588): prog-id=178 op=LOAD Jan 20 06:39:24.628681 kernel: audit: type=1334 audit(1768891164.603:589): prog-id=179 op=LOAD Jan 20 06:39:24.602000 audit: BPF prog-id=178 op=LOAD Jan 20 06:39:24.603000 audit: BPF prog-id=179 op=LOAD Jan 20 06:39:24.626485 systemd-networkd[1524]: cali822ed88ff66: Link UP Jan 20 06:39:24.628410 systemd-networkd[1524]: cali822ed88ff66: Gained carrier Jan 20 06:39:24.603000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.690503 kernel: audit: type=1300 audit(1768891164.603:589): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.692978 systemd-resolved[1297]: Failed to determine the local hostname and 
LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:24.737687 kernel: audit: type=1327 audit(1768891164.603:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.603000 audit: BPF prog-id=179 op=UNLOAD Jan 20 06:39:24.603000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.803924 kernel: audit: type=1334 audit(1768891164.603:590): prog-id=179 op=UNLOAD Jan 20 06:39:24.804412 kernel: audit: type=1300 audit(1768891164.603:590): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.145 [INFO][4623] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.190 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7688649cc6--vz554-eth0 whisker-7688649cc6- calico-system 85a3d7fc-92d2-477e-a3c6-cf998fc60fae 1084 0 2026-01-20 06:39:22 +0000 UTC 
map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7688649cc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7688649cc6-vz554 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali822ed88ff66 [] [] }} ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.190 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.482 [INFO][4642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" HandleID="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Workload="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.482 [INFO][4642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" HandleID="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Workload="localhost-k8s-whisker--7688649cc6--vz554-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7688649cc6-vz554", "timestamp":"2026-01-20 06:39:23.482711515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:23.482 [INFO][4642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.088 [INFO][4642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.089 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.218 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.315 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.399 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.427 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.459 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.464 [INFO][4642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.480 [INFO][4642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.524 [INFO][4642] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.574 [INFO][4642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.574 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" host="localhost" Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.574 [INFO][4642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:24.844571 containerd[1645]: 2026-01-20 06:39:24.574 [INFO][4642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" HandleID="k8s-pod-network.ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Workload="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.603 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7688649cc6--vz554-eth0", GenerateName:"whisker-7688649cc6-", Namespace:"calico-system", SelfLink:"", UID:"85a3d7fc-92d2-477e-a3c6-cf998fc60fae", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 39, 22, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7688649cc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7688649cc6-vz554", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali822ed88ff66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.609 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.610 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali822ed88ff66 ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.627 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 
06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.666 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7688649cc6--vz554-eth0", GenerateName:"whisker-7688649cc6-", Namespace:"calico-system", SelfLink:"", UID:"85a3d7fc-92d2-477e-a3c6-cf998fc60fae", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 39, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7688649cc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d", Pod:"whisker-7688649cc6-vz554", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali822ed88ff66", MAC:"9e:2b:5a:39:11:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:24.846515 containerd[1645]: 2026-01-20 06:39:24.749 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" Namespace="calico-system" Pod="whisker-7688649cc6-vz554" WorkloadEndpoint="localhost-k8s-whisker--7688649cc6--vz554-eth0" Jan 20 06:39:24.853436 kernel: audit: type=1327 audit(1768891164.603:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.853488 kernel: audit: type=1334 audit(1768891164.611:591): prog-id=180 op=LOAD Jan 20 06:39:24.611000 audit: BPF prog-id=180 op=LOAD Jan 20 06:39:24.611000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.920577 kernel: audit: type=1300 audit(1768891164.611:591): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.962538 kernel: audit: type=1327 audit(1768891164.611:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.611000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.972977 systemd[1]: Started cri-containerd-4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606.scope - libcontainer container 4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606. Jan 20 06:39:24.616000 audit: BPF prog-id=181 op=LOAD Jan 20 06:39:24.616000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.616000 audit: BPF prog-id=181 op=UNLOAD Jan 20 06:39:24.616000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.616000 audit: BPF prog-id=180 op=UNLOAD Jan 20 06:39:24.616000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.616000 audit: BPF prog-id=182 op=LOAD Jan 20 06:39:24.616000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4682 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438303762653263363531363139663565653361353030626339363631 Jan 20 06:39:24.662000 audit: BPF prog-id=183 op=LOAD Jan 20 06:39:24.667000 audit: BPF prog-id=184 op=LOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021a238 a2=98 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.667000 audit: BPF prog-id=184 op=UNLOAD 
Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.667000 audit: BPF prog-id=185 op=LOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021a488 a2=98 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.667000 audit: BPF prog-id=186 op=LOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00021a218 a2=98 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 
06:39:24.667000 audit: BPF prog-id=186 op=UNLOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.667000 audit: BPF prog-id=185 op=UNLOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.667000 audit: BPF prog-id=187 op=LOAD Jan 20 06:39:24.667000 audit[4719]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00021a6e8 a2=98 a3=0 items=0 ppid=4699 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:24.667000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335656231333137366230653735306436653165303666356364333635 Jan 20 06:39:24.986495 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:25.126657 containerd[1645]: time="2026-01-20T06:39:25.126606377Z" level=info msg="connecting to shim ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d" address="unix:///run/containerd/s/98d540ad9c439e57c968559b39e8021da26bdc08f8bd262d655edb5ae0156f5e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:25.210000 audit: BPF prog-id=188 op=LOAD Jan 20 06:39:25.212000 audit: BPF prog-id=189 op=LOAD Jan 20 06:39:25.212000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.212000 audit: BPF prog-id=189 op=UNLOAD Jan 20 06:39:25.212000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.221000 audit: BPF prog-id=190 op=LOAD Jan 20 06:39:25.221000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.222000 audit: BPF prog-id=191 op=LOAD Jan 20 06:39:25.222000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.222000 audit: BPF prog-id=191 op=UNLOAD Jan 20 06:39:25.222000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:39:25.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.222000 audit: BPF prog-id=190 op=UNLOAD Jan 20 06:39:25.222000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.222000 audit: BPF prog-id=192 op=LOAD Jan 20 06:39:25.222000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4793 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.222000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464393963326164336231346637373639613133633532643962636366 Jan 20 06:39:25.229855 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:25.232612 containerd[1645]: time="2026-01-20T06:39:25.232465910Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-nqfrx,Uid:fdd5baaa-865a-43eb-a3a6-626c707ee467,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d807be2c651619f5ee3a500bc9661b41d0234a8669d22b7880fb2922aac7a53e\"" Jan 20 06:39:25.264648 containerd[1645]: time="2026-01-20T06:39:25.258351739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:25.410841 systemd[1]: Started cri-containerd-ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d.scope - libcontainer container ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d. Jan 20 06:39:25.436462 systemd-networkd[1524]: caliba427bbd6cf: Gained IPv6LL Jan 20 06:39:25.503773 containerd[1645]: time="2026-01-20T06:39:25.501612644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-728fw,Uid:fad6472f-e56c-45a1-b03c-51f4a6fda495,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606\"" Jan 20 06:39:25.515010 kubelet[2865]: E0120 06:39:25.514352 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:25.734000 audit: BPF prog-id=193 op=LOAD Jan 20 06:39:25.737000 audit: BPF prog-id=194 op=LOAD Jan 20 06:39:25.737000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 
06:39:25.738000 audit: BPF prog-id=194 op=UNLOAD Jan 20 06:39:25.738000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.739000 audit: BPF prog-id=195 op=LOAD Jan 20 06:39:25.739000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.739000 audit: BPF prog-id=196 op=LOAD Jan 20 06:39:25.739000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.739000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.740000 audit: BPF prog-id=196 op=UNLOAD Jan 20 06:39:25.740000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.740000 audit: BPF prog-id=195 op=UNLOAD Jan 20 06:39:25.740000 audit[4920]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:25.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.742000 audit: BPF prog-id=197 op=LOAD Jan 20 06:39:25.742000 audit[4920]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4903 pid=4920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:25.742000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563623231336332393162326334306537636235653732303437316661 Jan 20 06:39:25.744742 containerd[1645]: time="2026-01-20T06:39:25.744514935Z" level=info msg="CreateContainer within sandbox \"4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:39:25.759668 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:25.778451 containerd[1645]: time="2026-01-20T06:39:25.778279790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:25.796262 containerd[1645]: time="2026-01-20T06:39:25.793493447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-grqpc,Uid:1d1bd19b-efe8-47e1-8a7a-7256f246c0d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"35eb13176b0e750d6e1e06f5cd365a521b11991aade78ad39d7ca16f33f8fe14\"" Jan 20 06:39:25.820368 containerd[1645]: time="2026-01-20T06:39:25.820301625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:25.820523 containerd[1645]: time="2026-01-20T06:39:25.820408615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:25.827577 kubelet[2865]: E0120 06:39:25.826892 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:25.841420 kubelet[2865]: E0120 06:39:25.840837 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:25.844990 kubelet[2865]: E0120 06:39:25.844640 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:25.846593 kubelet[2865]: E0120 06:39:25.846565 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:39:25.862676 containerd[1645]: time="2026-01-20T06:39:25.861005890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:39:25.905013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900186067.mount: Deactivated successfully. 
Jan 20 06:39:25.973879 containerd[1645]: time="2026-01-20T06:39:25.971984362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:25.982752 containerd[1645]: time="2026-01-20T06:39:25.982582821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:39:25.982888 containerd[1645]: time="2026-01-20T06:39:25.982791520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:25.991503 containerd[1645]: time="2026-01-20T06:39:25.984392657Z" level=info msg="Container 289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:25.986937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708613106.mount: Deactivated successfully. 
Jan 20 06:39:25.991707 kubelet[2865]: E0120 06:39:25.984663 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:39:25.991707 kubelet[2865]: E0120 06:39:25.984726 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:39:25.991707 kubelet[2865]: E0120 06:39:25.984881 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,
MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g29tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:25.991707 kubelet[2865]: E0120 06:39:25.990526 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:26.061477 containerd[1645]: time="2026-01-20T06:39:26.060604708Z" level=info msg="CreateContainer within sandbox \"4d99c2ad3b14f7769a13c52d9bccff8d13e5cf15e5f8c575184371b60a0e4606\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247\"" Jan 20 06:39:26.067489 containerd[1645]: time="2026-01-20T06:39:26.066933432Z" level=info msg="StartContainer for \"289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247\"" Jan 20 06:39:26.084785 containerd[1645]: time="2026-01-20T06:39:26.084547600Z" level=info msg="connecting to shim 289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247" address="unix:///run/containerd/s/76016a52c699d261025d3165cdcc3701c255a6413f747c53776cdee557ce3820" protocol=ttrpc version=3 Jan 20 06:39:26.205688 systemd-networkd[1524]: calida1a4551219: Gained IPv6LL Jan 20 06:39:26.270449 containerd[1645]: time="2026-01-20T06:39:26.269388034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7688649cc6-vz554,Uid:85a3d7fc-92d2-477e-a3c6-cf998fc60fae,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecb213c291b2c40e7cb5e720471fa79fdb6308861c933ecc2734ff361a76421d\"" Jan 20 06:39:26.286628 systemd[1]: Started cri-containerd-289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247.scope - libcontainer container 289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247. 
Jan 20 06:39:26.297873 containerd[1645]: time="2026-01-20T06:39:26.297560013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:39:26.382000 audit: BPF prog-id=198 op=LOAD Jan 20 06:39:26.385000 audit: BPF prog-id=199 op=LOAD Jan 20 06:39:26.385000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.385000 audit: BPF prog-id=199 op=UNLOAD Jan 20 06:39:26.385000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.388000 audit: BPF prog-id=200 op=LOAD Jan 20 06:39:26.388000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.388000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.388000 audit: BPF prog-id=201 op=LOAD Jan 20 06:39:26.388000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.388000 audit: BPF prog-id=201 op=UNLOAD Jan 20 06:39:26.388000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.388000 audit: BPF prog-id=200 op=UNLOAD Jan 20 06:39:26.388000 audit[4962]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:26.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.388000 audit: BPF prog-id=202 op=LOAD Jan 20 06:39:26.388000 audit[4962]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4793 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:26.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238393938316633623535353430623034386133616130633037353532 Jan 20 06:39:26.395496 containerd[1645]: time="2026-01-20T06:39:26.391536531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:26.404648 containerd[1645]: time="2026-01-20T06:39:26.404326677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:39:26.404648 containerd[1645]: time="2026-01-20T06:39:26.404545574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:26.408745 kubelet[2865]: E0120 06:39:26.408391 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:39:26.408745 kubelet[2865]: E0120 06:39:26.408468 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:39:26.408745 kubelet[2865]: E0120 06:39:26.408603 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c510b35c9db4f5cba555b64598fab18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:26.420443 containerd[1645]: time="2026-01-20T06:39:26.419397225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:39:26.508524 containerd[1645]: time="2026-01-20T06:39:26.505727966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:26.520824 containerd[1645]: time="2026-01-20T06:39:26.517845096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:39:26.520988 kubelet[2865]: E0120 06:39:26.520642 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:39:26.520988 kubelet[2865]: E0120 06:39:26.520706 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:39:26.520988 kubelet[2865]: E0120 06:39:26.520836 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:26.525666 kubelet[2865]: E0120 06:39:26.522908 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:39:26.525972 containerd[1645]: time="2026-01-20T06:39:26.522352455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:26.651711 systemd-networkd[1524]: cali822ed88ff66: Gained IPv6LL Jan 20 06:39:26.685588 containerd[1645]: time="2026-01-20T06:39:26.685451001Z" level=info msg="StartContainer for \"289981f3b55540b048a3aa0c075529973bca7f29d0dc36e111449ed6a36ae247\" returns successfully" Jan 20 06:39:26.872448 kubelet[2865]: E0120 06:39:26.871764 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:26.885866 kubelet[2865]: E0120 06:39:26.885812 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:39:26.889377 kubelet[2865]: E0120 06:39:26.888635 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:39:26.892985 kubelet[2865]: E0120 06:39:26.892959 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:27.015689 kubelet[2865]: I0120 06:39:27.013588 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-728fw" podStartSLOduration=88.013570179 podStartE2EDuration="1m28.013570179s" 
podCreationTimestamp="2026-01-20 06:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:26.918758342 +0000 UTC m=+92.125234375" watchObservedRunningTime="2026-01-20 06:39:27.013570179 +0000 UTC m=+92.220046212" Jan 20 06:39:27.092640 kubelet[2865]: E0120 06:39:27.092503 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:27.411000 audit[5024]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:27.411000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff7ddbb350 a2=0 a3=7fff7ddbb33c items=0 ppid=2978 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:27.417000 audit[5024]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5024 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:27.417000 audit[5024]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff7ddbb350 a2=0 a3=0 items=0 ppid=2978 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:27.687000 audit: BPF prog-id=203 op=LOAD Jan 20 06:39:27.687000 
audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0f4f2580 a2=98 a3=1fffffffffffffff items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.687000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:39:27.688000 audit: BPF prog-id=203 op=UNLOAD Jan 20 06:39:27.688000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0f4f2550 a3=0 items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.688000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:39:27.689000 audit: BPF prog-id=204 op=LOAD Jan 20 06:39:27.689000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0f4f2460 a2=94 a3=3 items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 
Jan 20 06:39:27.689000 audit: BPF prog-id=204 op=UNLOAD Jan 20 06:39:27.689000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0f4f2460 a2=94 a3=3 items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:39:27.689000 audit: BPF prog-id=205 op=LOAD Jan 20 06:39:27.689000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0f4f24a0 a2=94 a3=7ffd0f4f2680 items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:39:27.689000 audit: BPF prog-id=205 op=UNLOAD Jan 20 06:39:27.689000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0f4f24a0 a2=94 a3=7ffd0f4f2680 items=0 ppid=4771 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.689000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:39:27.745000 audit: BPF prog-id=206 op=LOAD Jan 20 06:39:27.745000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48775620 a2=98 a3=3 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.745000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.746000 audit: BPF prog-id=206 op=UNLOAD Jan 20 06:39:27.746000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc487755f0 a3=0 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.746000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.747000 audit: BPF prog-id=207 op=LOAD Jan 20 06:39:27.747000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc48775410 a2=94 a3=54428f items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.748000 audit: BPF prog-id=207 op=UNLOAD Jan 20 06:39:27.748000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc48775410 a2=94 a3=54428f items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.748000 audit: BPF prog-id=208 op=LOAD Jan 20 06:39:27.748000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc48775440 a2=94 a3=2 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.748000 audit: BPF prog-id=208 op=UNLOAD Jan 20 06:39:27.748000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc48775440 a2=0 a3=2 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.748000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:27.756000 audit[5048]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:27.756000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe38be3910 a2=0 a3=7ffe38be38fc items=0 ppid=2978 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:27.766000 audit[5048]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 06:39:27.766000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe38be3910 a2=0 a3=0 items=0 ppid=2978 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:27.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:27.893670 kubelet[2865]: E0120 06:39:27.892938 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:27.897240 kubelet[2865]: E0120 06:39:27.896956 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:27.897930 kubelet[2865]: E0120 06:39:27.897844 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:39:28.233000 audit: BPF prog-id=209 op=LOAD Jan 20 06:39:28.233000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc48775300 a2=94 a3=1 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.233000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.233000 audit: BPF prog-id=209 op=UNLOAD Jan 20 06:39:28.233000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc48775300 a2=94 a3=1 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.233000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.248000 audit: BPF prog-id=210 op=LOAD Jan 20 06:39:28.248000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc487752f0 a2=94 a3=4 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.248000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.249000 audit: BPF prog-id=210 op=UNLOAD Jan 20 06:39:28.249000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc487752f0 a2=0 a3=4 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.249000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.253000 audit: BPF prog-id=211 op=LOAD Jan 20 06:39:28.253000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc48775150 a2=94 a3=5 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.253000 audit: BPF prog-id=211 op=UNLOAD Jan 20 06:39:28.253000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc48775150 a2=0 a3=5 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.253000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.254000 audit: BPF prog-id=212 op=LOAD Jan 20 06:39:28.254000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc48775370 a2=94 a3=6 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.254000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.254000 audit: BPF prog-id=212 op=UNLOAD Jan 20 06:39:28.254000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc48775370 a2=0 a3=6 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.254000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.254000 audit: BPF prog-id=213 op=LOAD Jan 20 06:39:28.254000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc48774b20 a2=94 a3=88 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.254000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.254000 audit: BPF prog-id=214 op=LOAD Jan 20 06:39:28.254000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc487749a0 a2=94 a3=2 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.254000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.254000 audit: BPF prog-id=214 op=UNLOAD Jan 20 06:39:28.254000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc487749d0 a2=0 a3=7ffc48774ad0 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.254000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.256000 audit: BPF prog-id=213 op=UNLOAD Jan 20 06:39:28.256000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=31d6cd10 a2=0 a3=3bee2611d2182234 items=0 ppid=4771 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:28.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:39:28.317000 audit: BPF prog-id=215 op=LOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc17431b0 a2=98 a3=1999999999999999 items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.317000 audit: BPF prog-id=215 op=UNLOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc1743180 a3=0 items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.317000 audit: BPF prog-id=216 op=LOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc1743090 a2=94 a3=ffff items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.317000 audit: BPF prog-id=216 op=UNLOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc1743090 a2=94 a3=ffff items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.317000 audit: BPF prog-id=217 op=LOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc17430d0 a2=94 a3=7ffcc17432b0 items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.317000 audit: BPF prog-id=217 op=UNLOAD Jan 20 06:39:28.317000 audit[5052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc17430d0 a2=94 a3=7ffcc17432b0 items=0 ppid=4771 pid=5052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:39:28.887000 audit[5070]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:28.887000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea58ed100 a2=0 a3=7ffea58ed0ec items=0 ppid=2978 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:28.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:28.892782 systemd-networkd[1524]: vxlan.calico: Link UP Jan 20 06:39:28.892794 systemd-networkd[1524]: vxlan.calico: Gained carrier Jan 20 06:39:28.916918 kubelet[2865]: E0120 06:39:28.916744 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:28.900000 audit[5070]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:28.900000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffea58ed100 a2=0 a3=7ffea58ed0ec items=0 ppid=2978 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:28.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:29.096840 kubelet[2865]: E0120 06:39:29.096537 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:29.154000 audit: BPF prog-id=218 op=LOAD Jan 20 06:39:29.154000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae97ebe0 a2=98 a3=0 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.154000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.155000 audit: BPF prog-id=218 op=UNLOAD Jan 20 06:39:29.155000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeae97ebb0 a3=0 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.155000 audit: BPF prog-id=219 op=LOAD Jan 20 06:39:29.155000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae97e9f0 a2=94 a3=54428f items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.155000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=219 op=UNLOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeae97e9f0 a2=94 a3=54428f items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=220 op=LOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeae97ea20 a2=94 a3=2 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=220 op=UNLOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeae97ea20 a2=0 a3=2 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=221 op=LOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae97e7d0 a2=94 a3=4 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=221 op=UNLOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae97e7d0 a2=94 a3=4 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=222 op=LOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae97e8d0 a2=94 a3=7ffeae97ea50 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.157000 audit: BPF prog-id=222 op=UNLOAD Jan 20 06:39:29.157000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae97e8d0 a2=0 a3=7ffeae97ea50 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.157000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.159000 audit: BPF prog-id=223 op=LOAD Jan 20 06:39:29.159000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae97e000 a2=94 a3=2 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.159000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.159000 audit: BPF prog-id=223 op=UNLOAD Jan 20 06:39:29.159000 audit[5082]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeae97e000 a2=0 a3=2 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.159000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.162000 audit: BPF prog-id=224 op=LOAD Jan 20 06:39:29.162000 audit[5082]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeae97e100 a2=94 a3=30 items=0 ppid=4771 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.162000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:39:29.226000 audit: BPF prog-id=225 op=LOAD Jan 20 06:39:29.226000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc41a3f780 a2=98 a3=0 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.226000 audit: BPF prog-id=225 op=UNLOAD Jan 20 06:39:29.226000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc41a3f750 a3=0 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.226000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.226000 audit: BPF prog-id=226 op=LOAD Jan 20 06:39:29.226000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc41a3f570 a2=94 a3=54428f items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.226000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.227000 audit: BPF prog-id=226 op=UNLOAD Jan 20 06:39:29.227000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc41a3f570 a2=94 a3=54428f items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.227000 audit: BPF prog-id=227 op=LOAD Jan 20 06:39:29.227000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc41a3f5a0 a2=94 a3=2 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.227000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.227000 audit: BPF prog-id=227 op=UNLOAD Jan 20 06:39:29.227000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc41a3f5a0 a2=0 a3=2 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.227000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.898881 kernel: kauditd_printk_skb: 265 callbacks suppressed Jan 20 06:39:29.899649 kernel: audit: type=1334 audit(1768891169.874:683): prog-id=228 op=LOAD Jan 20 06:39:29.874000 audit: BPF prog-id=228 op=LOAD Jan 20 06:39:29.874000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc41a3f460 a2=94 a3=1 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.948536 kernel: audit: type=1300 audit(1768891169.874:683): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc41a3f460 a2=94 a3=1 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.983823 kubelet[2865]: E0120 
06:39:29.961979 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:29.984625 kernel: audit: type=1327 audit(1768891169.874:683): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.874000 audit: BPF prog-id=228 op=UNLOAD Jan 20 06:39:29.874000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc41a3f460 a2=94 a3=1 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.041752 kernel: audit: type=1334 audit(1768891169.874:684): prog-id=228 op=UNLOAD Jan 20 06:39:30.041812 kernel: audit: type=1300 audit(1768891169.874:684): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc41a3f460 a2=94 a3=1 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.076619 kernel: audit: type=1327 audit(1768891169.874:684): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.874000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.889000 audit: BPF prog-id=229 op=LOAD Jan 20 06:39:30.139595 kernel: audit: type=1334 audit(1768891169.889:685): prog-id=229 op=LOAD Jan 20 06:39:30.139742 kernel: audit: type=1300 
audit(1768891169.889:685): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc41a3f450 a2=94 a3=4 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.889000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc41a3f450 a2=94 a3=4 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.889000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:30.179713 kernel: audit: type=1327 audit(1768891169.889:685): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.890000 audit: BPF prog-id=229 op=UNLOAD Jan 20 06:39:29.890000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc41a3f450 a2=0 a3=4 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.890000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.890000 audit: BPF prog-id=230 op=LOAD Jan 20 06:39:30.193703 kernel: audit: type=1334 audit(1768891169.890:686): prog-id=229 op=UNLOAD Jan 20 06:39:29.890000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 
a1=7ffc41a3f2b0 a2=94 a3=5 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.890000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.891000 audit: BPF prog-id=230 op=UNLOAD Jan 20 06:39:29.891000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc41a3f2b0 a2=0 a3=5 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.891000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.891000 audit: BPF prog-id=231 op=LOAD Jan 20 06:39:29.891000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc41a3f4d0 a2=94 a3=6 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.891000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.892000 audit: BPF prog-id=231 op=UNLOAD Jan 20 06:39:29.892000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc41a3f4d0 a2=0 a3=6 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.892000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.892000 audit: BPF prog-id=232 op=LOAD Jan 20 06:39:29.892000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc41a3ec80 a2=94 a3=88 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.892000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.892000 audit: BPF prog-id=233 op=LOAD Jan 20 06:39:29.892000 audit[5091]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc41a3eb00 a2=94 a3=2 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.892000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.892000 audit: BPF prog-id=233 op=UNLOAD Jan 20 06:39:29.892000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc41a3eb30 a2=0 a3=7ffc41a3ec30 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.892000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:29.893000 audit: BPF prog-id=232 op=UNLOAD Jan 20 06:39:29.893000 audit[5091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=11722d10 a2=0 a3=a2e37cc944191178 items=0 ppid=4771 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:29.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:39:30.277000 audit: BPF prog-id=224 op=UNLOAD Jan 20 06:39:30.277000 audit[4771]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c001455880 a2=0 a3=0 items=0 ppid=4734 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.277000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 06:39:30.683913 systemd-networkd[1524]: vxlan.calico: Gained IPv6LL Jan 20 06:39:30.778000 audit[5115]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:30.778000 audit[5115]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd600fed90 a2=0 a3=7ffd600fed7c items=0 ppid=4771 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.778000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:30.897000 audit[5119]: NETFILTER_CFG table=mangle:130 family=2 entries=16 op=nft_register_chain pid=5119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:30.897000 audit[5119]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc41d99690 a2=0 a3=7ffc41d9967c items=0 ppid=4771 pid=5119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.897000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:30.901000 audit[5116]: NETFILTER_CFG table=raw:131 family=2 entries=21 op=nft_register_chain pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:30.901000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcad0aaaa0 a2=0 a3=7ffcad0aaa8c items=0 ppid=4771 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:30.928000 audit[5118]: NETFILTER_CFG table=filter:132 family=2 entries=206 op=nft_register_chain pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:30.928000 audit[5118]: SYSCALL arch=c000003e syscall=46 success=yes exit=120356 a0=3 a1=7ffc1cdff690 a2=0 a3=55a404586000 items=0 ppid=4771 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:30.928000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:31.074283 kubelet[2865]: E0120 06:39:31.071785 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:31.079016 containerd[1645]: time="2026-01-20T06:39:31.078626774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,}" Jan 20 06:39:31.079016 containerd[1645]: time="2026-01-20T06:39:31.079624765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:32.147971 systemd-networkd[1524]: califbdaa7a5e80: Link UP Jan 20 06:39:32.149871 systemd-networkd[1524]: califbdaa7a5e80: Gained carrier Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.548 [INFO][5132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kp869-eth0 csi-node-driver- calico-system 67f738e9-ce9e-42e1-a454-66084ff2d3ad 809 0 2026-01-20 06:38:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kp869 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califbdaa7a5e80 [] [] }} 
ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.552 [INFO][5132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.855 [INFO][5160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" HandleID="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Workload="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.858 [INFO][5160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" HandleID="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Workload="localhost-k8s-csi--node--driver--kp869-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bdf50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kp869", "timestamp":"2026-01-20 06:39:31.855003064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.862 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.862 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.862 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.887 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.928 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.975 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:31.990 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.011 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.013 [INFO][5160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.026 [INFO][5160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.064 [INFO][5160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.085 [INFO][5160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.086 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" host="localhost" Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.087 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:32.210666 containerd[1645]: 2026-01-20 06:39:32.088 [INFO][5160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" HandleID="k8s-pod-network.b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Workload="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.107 [INFO][5132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kp869-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67f738e9-ce9e-42e1-a454-66084ff2d3ad", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kp869", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbdaa7a5e80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.107 [INFO][5132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.110 [INFO][5132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbdaa7a5e80 ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.150 [INFO][5132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.152 [INFO][5132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" 
Namespace="calico-system" Pod="csi-node-driver-kp869" WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kp869-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"67f738e9-ce9e-42e1-a454-66084ff2d3ad", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf", Pod:"csi-node-driver-kp869", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califbdaa7a5e80", MAC:"22:bc:32:8c:65:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:32.213572 containerd[1645]: 2026-01-20 06:39:32.199 [INFO][5132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" Namespace="calico-system" Pod="csi-node-driver-kp869" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kp869-eth0" Jan 20 06:39:32.383010 systemd-networkd[1524]: cali6f18c656766: Link UP Jan 20 06:39:32.385487 systemd-networkd[1524]: cali6f18c656766: Gained carrier Jan 20 06:39:32.416000 audit[5185]: NETFILTER_CFG table=filter:133 family=2 entries=48 op=nft_register_chain pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:32.436371 containerd[1645]: time="2026-01-20T06:39:32.435826060Z" level=info msg="connecting to shim b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf" address="unix:///run/containerd/s/5e09cfd117dec7bac16fc30861e10c29c26b4e50a947bf4b86ec6479d0fa2b9b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:32.416000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffc6ca84760 a2=0 a3=7ffc6ca8474c items=0 ppid=4771 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.416000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:31.572 [INFO][5133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0 coredns-668d6bf9bc- kube-system 386fb045-c424-4905-ac49-b24568eb8b4b 940 0 2026-01-20 06:37:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-t5gmg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f18c656766 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:31.575 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:31.931 [INFO][5167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" HandleID="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Workload="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:31.933 [INFO][5167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" HandleID="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Workload="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-t5gmg", "timestamp":"2026-01-20 06:39:31.931540606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:31.934 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.087 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.087 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.154 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.190 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.231 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.267 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.281 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.281 [INFO][5167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.294 [INFO][5167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.319 [INFO][5167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.343 [INFO][5167] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.343 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" host="localhost" Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.343 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:32.466686 containerd[1645]: 2026-01-20 06:39:32.343 [INFO][5167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" HandleID="k8s-pod-network.fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Workload="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.369 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"386fb045-c424-4905-ac49-b24568eb8b4b", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-t5gmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f18c656766", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.370 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.370 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f18c656766 ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.382 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.389 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"386fb045-c424-4905-ac49-b24568eb8b4b", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 37, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c", Pod:"coredns-668d6bf9bc-t5gmg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f18c656766", MAC:"3e:1d:89:ef:c8:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:32.468552 containerd[1645]: 2026-01-20 06:39:32.437 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" Namespace="kube-system" Pod="coredns-668d6bf9bc-t5gmg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--t5gmg-eth0" Jan 20 06:39:32.678006 containerd[1645]: time="2026-01-20T06:39:32.676586239Z" level=info msg="connecting to shim fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c" address="unix:///run/containerd/s/1a6e49dce6f8eefc63d368466961d218779842098af25c7c4b290380970db90d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:32.706700 systemd[1]: Started cri-containerd-b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf.scope - libcontainer container b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf. 
Jan 20 06:39:32.734000 audit[5240]: NETFILTER_CFG table=filter:134 family=2 entries=48 op=nft_register_chain pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:32.734000 audit[5240]: SYSCALL arch=c000003e syscall=46 success=yes exit=22720 a0=3 a1=7ffdfaa38a60 a2=0 a3=7ffdfaa38a4c items=0 ppid=4771 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.734000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:32.875000 audit: BPF prog-id=234 op=LOAD Jan 20 06:39:32.877000 audit: BPF prog-id=235 op=LOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c238 a2=98 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=235 op=UNLOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=236 op=LOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c488 a2=98 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=237 op=LOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00018c218 a2=98 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=237 op=UNLOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=236 op=UNLOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.877000 audit: BPF prog-id=238 op=LOAD Jan 20 06:39:32.877000 audit[5213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00018c6e8 a2=98 a3=0 items=0 ppid=5196 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:32.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237313335363938623534346264323362373836333635363563373662 Jan 20 06:39:32.900554 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:32.929799 systemd[1]: Started cri-containerd-fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c.scope - libcontainer container 
fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c. Jan 20 06:39:33.042000 audit: BPF prog-id=239 op=LOAD Jan 20 06:39:33.049000 audit: BPF prog-id=240 op=LOAD Jan 20 06:39:33.049000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f0238 a2=98 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.049000 audit: BPF prog-id=240 op=UNLOAD Jan 20 06:39:33.049000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.050000 audit: BPF prog-id=241 op=LOAD Jan 20 06:39:33.050000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f0488 a2=98 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.050000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.050000 audit: BPF prog-id=242 op=LOAD Jan 20 06:39:33.050000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001f0218 a2=98 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.050000 audit: BPF prog-id=242 op=UNLOAD Jan 20 06:39:33.050000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.050000 audit: BPF prog-id=241 op=UNLOAD Jan 20 06:39:33.050000 audit[5249]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:33.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.050000 audit: BPF prog-id=243 op=LOAD Jan 20 06:39:33.050000 audit[5249]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001f06e8 a2=98 a3=0 items=0 ppid=5234 pid=5249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665356266373430343462633435376462333366333434373131623435 Jan 20 06:39:33.063714 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:33.078507 containerd[1645]: time="2026-01-20T06:39:33.078332259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kp869,Uid:67f738e9-ce9e-42e1-a454-66084ff2d3ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7135698b544bd23b78636565c76b7aedb07e09a67381ca26771d23b5f5605cf\"" Jan 20 06:39:33.080757 containerd[1645]: time="2026-01-20T06:39:33.079693377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:33.086549 containerd[1645]: time="2026-01-20T06:39:33.084923408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:39:33.086549 containerd[1645]: time="2026-01-20T06:39:33.079698987Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,}" Jan 20 06:39:33.258489 containerd[1645]: time="2026-01-20T06:39:33.255528087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:33.263631 containerd[1645]: time="2026-01-20T06:39:33.263597133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:39:33.263842 containerd[1645]: time="2026-01-20T06:39:33.263823986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:33.264844 kubelet[2865]: E0120 06:39:33.264811 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:39:33.266459 kubelet[2865]: E0120 06:39:33.265558 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:39:33.266459 kubelet[2865]: E0120 06:39:33.266392 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 06:39:33.268854 containerd[1645]: time="2026-01-20T06:39:33.268827605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:39:33.364669 containerd[1645]: time="2026-01-20T06:39:33.361531798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:33.369677 containerd[1645]: time="2026-01-20T06:39:33.369350785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:39:33.371361 containerd[1645]: time="2026-01-20T06:39:33.371022592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:33.371670 kubelet[2865]: E0120 06:39:33.371616 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:39:33.377336 kubelet[2865]: E0120 06:39:33.376648 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:39:33.377336 kubelet[2865]: E0120 06:39:33.376829 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:33.379464 kubelet[2865]: E0120 06:39:33.378785 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:33.421366 containerd[1645]: time="2026-01-20T06:39:33.421321475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t5gmg,Uid:386fb045-c424-4905-ac49-b24568eb8b4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c\"" Jan 20 06:39:33.425512 kubelet[2865]: E0120 06:39:33.425482 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:33.439489 containerd[1645]: time="2026-01-20T06:39:33.438016284Z" level=info msg="CreateContainer within sandbox \"fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:39:33.510953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226767794.mount: Deactivated successfully. 
Jan 20 06:39:33.527615 containerd[1645]: time="2026-01-20T06:39:33.527571292Z" level=info msg="Container e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:39:33.529005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060814752.mount: Deactivated successfully. Jan 20 06:39:33.578670 containerd[1645]: time="2026-01-20T06:39:33.578459394Z" level=info msg="CreateContainer within sandbox \"fe5bf74044bc457db33f344711b45a0e2aa329610968cd7896b1b5aefe734b4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822\"" Jan 20 06:39:33.584656 containerd[1645]: time="2026-01-20T06:39:33.584623718Z" level=info msg="StartContainer for \"e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822\"" Jan 20 06:39:33.587696 containerd[1645]: time="2026-01-20T06:39:33.587664128Z" level=info msg="connecting to shim e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822" address="unix:///run/containerd/s/1a6e49dce6f8eefc63d368466961d218779842098af25c7c4b290380970db90d" protocol=ttrpc version=3 Jan 20 06:39:33.746020 systemd[1]: Started cri-containerd-e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822.scope - libcontainer container e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822. 
Jan 20 06:39:33.757681 systemd-networkd[1524]: califbdaa7a5e80: Gained IPv6LL Jan 20 06:39:33.844000 audit: BPF prog-id=244 op=LOAD Jan 20 06:39:33.845000 audit: BPF prog-id=245 op=LOAD Jan 20 06:39:33.845000 audit[5320]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=245 op=UNLOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=246 op=LOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.846000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=247 op=LOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=247 op=UNLOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=246 op=UNLOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
06:39:33.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.846000 audit: BPF prog-id=248 op=LOAD Jan 20 06:39:33.846000 audit[5320]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5234 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:33.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663539363230653064653464393139363835653932303265356164 Jan 20 06:39:33.959849 containerd[1645]: time="2026-01-20T06:39:33.959700294Z" level=info msg="StartContainer for \"e4f59620e0de4d919685e9202e5adac6a8c9709ae1195f4df9cf01c3933af822\" returns successfully" Jan 20 06:39:34.037409 kubelet[2865]: E0120 06:39:34.036003 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:34.044407 kubelet[2865]: E0120 06:39:34.043950 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:34.075636 containerd[1645]: time="2026-01-20T06:39:34.074003710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:39:34.116500 systemd-networkd[1524]: calid6df2f5f03f: Link UP Jan 20 06:39:34.142017 systemd-networkd[1524]: cali6f18c656766: Gained IPv6LL Jan 20 06:39:34.150511 systemd-networkd[1524]: calid6df2f5f03f: Gained carrier Jan 20 06:39:34.288606 kubelet[2865]: I0120 06:39:34.284848 2865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t5gmg" podStartSLOduration=95.284824916 podStartE2EDuration="1m35.284824916s" podCreationTimestamp="2026-01-20 06:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:39:34.24459791 +0000 UTC m=+99.451073983" watchObservedRunningTime="2026-01-20 06:39:34.284824916 +0000 UTC m=+99.491300949" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.558 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0 calico-apiserver-6f8db8dd5b- calico-apiserver 8605c7f4-dda9-48f9-8faf-f356da42c13a 942 0 2026-01-20 06:38:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8db8dd5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f8db8dd5b-5v8sm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid6df2f5f03f [] [] }} ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.564 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.795 [INFO][5325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" HandleID="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.800 [INFO][5325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" HandleID="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000337400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f8db8dd5b-5v8sm", "timestamp":"2026-01-20 06:39:33.795794877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.801 [INFO][5325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.801 [INFO][5325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.801 [INFO][5325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.858 [INFO][5325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.888 [INFO][5325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.927 [INFO][5325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.953 [INFO][5325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.985 [INFO][5325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:33.987 [INFO][5325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.003 [INFO][5325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.041 [INFO][5325] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.071 [INFO][5325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.072 [INFO][5325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" host="localhost" Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.076 [INFO][5325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:34.442606 containerd[1645]: 2026-01-20 06:39:34.079 [INFO][5325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" HandleID="k8s-pod-network.c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Workload="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.091 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0", GenerateName:"calico-apiserver-6f8db8dd5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8605c7f4-dda9-48f9-8faf-f356da42c13a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 15, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8db8dd5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f8db8dd5b-5v8sm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6df2f5f03f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.092 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.092 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6df2f5f03f ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.156 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.172 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0", GenerateName:"calico-apiserver-6f8db8dd5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8605c7f4-dda9-48f9-8faf-f356da42c13a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8db8dd5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f", Pod:"calico-apiserver-6f8db8dd5b-5v8sm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid6df2f5f03f", MAC:"16:aa:f7:7f:c0:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:34.451999 containerd[1645]: 2026-01-20 06:39:34.368 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" Namespace="calico-apiserver" Pod="calico-apiserver-6f8db8dd5b-5v8sm" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f8db8dd5b--5v8sm-eth0" Jan 20 06:39:34.590746 systemd-networkd[1524]: cali5c844ebcff4: Link UP Jan 20 06:39:34.593402 systemd-networkd[1524]: cali5c844ebcff4: Gained carrier Jan 20 06:39:34.660000 audit[5398]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:34.660000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc9e710370 a2=0 a3=7ffc9e71035c items=0 ppid=2978 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:34.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:33.560 [INFO][5294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0 calico-kube-controllers-54fdff59b4- calico-system 1fb741a2-9573-41fd-9b50-18c9b4a4a79a 937 0 2026-01-20 06:38:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54fdff59b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54fdff59b4-bvgmz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c844ebcff4 [] [] }} ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:33.561 [INFO][5294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:33.803 [INFO][5322] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" HandleID="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Workload="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:33.804 [INFO][5322] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" HandleID="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Workload="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000512630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54fdff59b4-bvgmz", "timestamp":"2026-01-20 06:39:33.803727516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:33.804 [INFO][5322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.073 [INFO][5322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.076 [INFO][5322] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.156 [INFO][5322] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.207 [INFO][5322] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.298 [INFO][5322] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.316 [INFO][5322] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.353 [INFO][5322] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.353 [INFO][5322] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.368 [INFO][5322] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.411 [INFO][5322] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.471 [INFO][5322] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.471 [INFO][5322] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" host="localhost" Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.471 [INFO][5322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:34.730758 containerd[1645]: 2026-01-20 06:39:34.471 [INFO][5322] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" HandleID="k8s-pod-network.3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Workload="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.550 [INFO][5294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0", GenerateName:"calico-kube-controllers-54fdff59b4-", Namespace:"calico-system", SelfLink:"", UID:"1fb741a2-9573-41fd-9b50-18c9b4a4a79a", ResourceVersion:"937", 
Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54fdff59b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54fdff59b4-bvgmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c844ebcff4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.550 [INFO][5294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.551 [INFO][5294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c844ebcff4 ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.594 
[INFO][5294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.598 [INFO][5294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0", GenerateName:"calico-kube-controllers-54fdff59b4-", Namespace:"calico-system", SelfLink:"", UID:"1fb741a2-9573-41fd-9b50-18c9b4a4a79a", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54fdff59b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc", Pod:"calico-kube-controllers-54fdff59b4-bvgmz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c844ebcff4", MAC:"5e:e1:50:33:fd:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:34.734618 containerd[1645]: 2026-01-20 06:39:34.701 [INFO][5294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" Namespace="calico-system" Pod="calico-kube-controllers-54fdff59b4-bvgmz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54fdff59b4--bvgmz-eth0" Jan 20 06:39:34.757000 audit[5398]: NETFILTER_CFG table=nat:136 family=2 entries=44 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:34.757000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc9e710370 a2=0 a3=7ffc9e71035c items=0 ppid=2978 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:34.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:34.775999 containerd[1645]: time="2026-01-20T06:39:34.774817449Z" level=info msg="connecting to shim c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f" address="unix:///run/containerd/s/551b89a6ca16d4994acaeafe4c307b3a2b58428873ba3cbfeda7cb6ba6cea70b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:34.795000 audit[5400]: NETFILTER_CFG table=filter:137 family=2 entries=63 op=nft_register_chain pid=5400 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:34.795000 audit[5400]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffcb5e04270 a2=0 
a3=7ffcb5e0425c items=0 ppid=4771 pid=5400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:34.795000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:34.982610 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 20 06:39:34.987311 kernel: audit: type=1325 audit(1768891174.961:729): table=filter:138 family=2 entries=56 op=nft_register_chain pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:34.961000 audit[5439]: NETFILTER_CFG table=filter:138 family=2 entries=56 op=nft_register_chain pid=5439 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:34.961000 audit[5439]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffd79ef9cf0 a2=0 a3=7ffd79ef9cdc items=0 ppid=4771 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.082596 kernel: audit: type=1300 audit(1768891174.961:729): arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffd79ef9cf0 a2=0 a3=7ffd79ef9cdc items=0 ppid=4771 pid=5439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.082839 kernel: audit: type=1327 audit(1768891174.961:729): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:34.961000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:35.082933 containerd[1645]: time="2026-01-20T06:39:35.031008384Z" level=info msg="connecting to shim 3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc" address="unix:///run/containerd/s/2116ceb5669036b0280246e3d991c84dd81879b1c58650bdb9c01216c679849b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:35.088955 kubelet[2865]: E0120 06:39:35.085636 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:35.120981 kubelet[2865]: E0120 06:39:35.117648 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:35.277555 systemd[1]: Started cri-containerd-c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f.scope - libcontainer container c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f. 
Jan 20 06:39:35.356898 systemd-networkd[1524]: cali05553b42cea: Link UP Jan 20 06:39:35.357578 systemd-networkd[1524]: cali05553b42cea: Gained carrier Jan 20 06:39:35.526000 audit[5493]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:35.559529 kernel: audit: type=1325 audit(1768891175.526:730): table=filter:139 family=2 entries=14 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.574 [INFO][5373] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0 calico-apiserver-5bb7ff584c- calico-apiserver 1b97c41d-4ead-4c93-97f0-70532331e2e7 949 0 2026-01-20 06:38:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bb7ff584c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5bb7ff584c-brrnn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05553b42cea [] [] }} ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.577 [INFO][5373] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.870 [INFO][5403] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" HandleID="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Workload="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.874 [INFO][5403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" HandleID="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Workload="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5bb7ff584c-brrnn", "timestamp":"2026-01-20 06:39:34.870575049 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.874 [INFO][5403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.874 [INFO][5403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.874 [INFO][5403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:34.988 [INFO][5403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.038 [INFO][5403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.114 [INFO][5403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.133 [INFO][5403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.158 [INFO][5403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.158 [INFO][5403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.176 [INFO][5403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503 Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.230 [INFO][5403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.289 [INFO][5403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.289 [INFO][5403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" host="localhost" Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.289 [INFO][5403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:39:35.565465 containerd[1645]: 2026-01-20 06:39:35.289 [INFO][5403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" HandleID="k8s-pod-network.0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Workload="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.526000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed676e0d0 a2=0 a3=7ffed676e0bc items=0 ppid=2978 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.324 [INFO][5373] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0", GenerateName:"calico-apiserver-5bb7ff584c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b97c41d-4ead-4c93-97f0-70532331e2e7", ResourceVersion:"949", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb7ff584c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5bb7ff584c-brrnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05553b42cea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.324 [INFO][5373] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.324 [INFO][5373] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05553b42cea ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.402 [INFO][5373] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.404 [INFO][5373] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0", GenerateName:"calico-apiserver-5bb7ff584c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b97c41d-4ead-4c93-97f0-70532331e2e7", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 38, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bb7ff584c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503", Pod:"calico-apiserver-5bb7ff584c-brrnn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05553b42cea", MAC:"1a:aa:c1:49:4d:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:39:35.567486 containerd[1645]: 2026-01-20 06:39:35.502 [INFO][5373] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" Namespace="calico-apiserver" Pod="calico-apiserver-5bb7ff584c-brrnn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5bb7ff584c--brrnn-eth0" Jan 20 06:39:35.600719 kernel: audit: type=1300 audit(1768891175.526:730): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed676e0d0 a2=0 a3=7ffed676e0bc items=0 ppid=2978 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:35.616293 kernel: audit: type=1327 audit(1768891175.526:730): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:35.618368 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:35.620816 systemd[1]: Started cri-containerd-3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc.scope - libcontainer container 3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc. 
Jan 20 06:39:35.560000 audit: BPF prog-id=249 op=LOAD Jan 20 06:39:35.635347 kernel: audit: type=1334 audit(1768891175.560:731): prog-id=249 op=LOAD Jan 20 06:39:35.561000 audit: BPF prog-id=250 op=LOAD Jan 20 06:39:35.665680 kernel: audit: type=1334 audit(1768891175.561:732): prog-id=250 op=LOAD Jan 20 06:39:35.561000 audit[5438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.719414 containerd[1645]: time="2026-01-20T06:39:35.704285069Z" level=info msg="connecting to shim 0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503" address="unix:///run/containerd/s/b5f4b49104485005828a9e25ccc45b12dbf21804c762e689fa848297d5153977" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:39:35.724680 kernel: audit: type=1300 audit(1768891175.561:732): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.724863 kernel: audit: type=1327 audit(1768891175.561:732): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.561000 audit: BPF prog-id=250 
op=UNLOAD Jan 20 06:39:35.561000 audit[5438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.561000 audit: BPF prog-id=251 op=LOAD Jan 20 06:39:35.561000 audit[5438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.561000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.563000 audit: BPF prog-id=252 op=LOAD Jan 20 06:39:35.563000 audit[5438]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 
20 06:39:35.563000 audit: BPF prog-id=252 op=UNLOAD Jan 20 06:39:35.563000 audit[5438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.563000 audit: BPF prog-id=251 op=UNLOAD Jan 20 06:39:35.563000 audit[5438]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.563000 audit: BPF prog-id=253 op=LOAD Jan 20 06:39:35.563000 audit[5438]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5417 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.563000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336643363326630356266623736666539643434626663353862343832 Jan 20 06:39:35.668000 audit[5493]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:35.668000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffed676e0d0 a2=0 a3=7ffed676e0bc items=0 ppid=2978 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:35.750000 audit[5509]: NETFILTER_CFG table=filter:141 family=2 entries=61 op=nft_register_chain pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:39:35.750000 audit[5509]: SYSCALL arch=c000003e syscall=46 success=yes exit=29000 a0=3 a1=7ffeb1829f80 a2=0 a3=7ffeb1829f6c items=0 ppid=4771 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.750000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:39:35.799000 audit: BPF prog-id=254 op=LOAD Jan 20 06:39:35.800000 audit: BPF prog-id=255 op=LOAD Jan 20 06:39:35.800000 audit[5471]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.800000 audit: BPF prog-id=255 op=UNLOAD Jan 20 06:39:35.800000 audit[5471]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.803000 audit: BPF prog-id=256 op=LOAD Jan 20 06:39:35.803000 audit[5471]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.804000 audit: BPF prog-id=257 op=LOAD Jan 20 06:39:35.804000 audit[5471]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5448 pid=5471 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.804000 audit: BPF prog-id=257 op=UNLOAD Jan 20 06:39:35.804000 audit[5471]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.804000 audit: BPF prog-id=256 op=UNLOAD Jan 20 06:39:35.804000 audit[5471]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.804000 audit: BPF prog-id=258 op=LOAD Jan 20 06:39:35.804000 audit[5471]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 
a3=0 items=0 ppid=5448 pid=5471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366396133663430633532366234626330343934373631373062643639 Jan 20 06:39:35.809270 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:35.854394 systemd[1]: Started cri-containerd-0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503.scope - libcontainer container 0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503. Jan 20 06:39:35.901377 containerd[1645]: time="2026-01-20T06:39:35.901325858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8db8dd5b-5v8sm,Uid:8605c7f4-dda9-48f9-8faf-f356da42c13a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c6d3c2f05bfb76fe9d44bfc58b482e4f88619529fb6bce8996bf0bcab315e51f\"" Jan 20 06:39:35.910134 containerd[1645]: time="2026-01-20T06:39:35.909561784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:35.976000 audit: BPF prog-id=259 op=LOAD Jan 20 06:39:35.978000 audit: BPF prog-id=260 op=LOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=260 op=UNLOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=261 op=LOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=262 op=LOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=262 op=UNLOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=261 op=UNLOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.978000 audit: BPF prog-id=263 op=LOAD Jan 20 06:39:35.978000 audit[5532]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5514 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:35.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036303365393237626334666231316538353035636566303137306361 Jan 20 06:39:35.986293 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 06:39:36.006655 containerd[1645]: time="2026-01-20T06:39:36.006597333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:36.036535 containerd[1645]: time="2026-01-20T06:39:36.036477880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:36.038309 containerd[1645]: time="2026-01-20T06:39:36.036762570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:36.040001 kubelet[2865]: E0120 06:39:36.039547 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:36.040001 kubelet[2865]: E0120 06:39:36.039620 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:36.045774 kubelet[2865]: E0120 06:39:36.039898 2865 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:36.048502 kubelet[2865]: E0120 06:39:36.047915 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:36.054815 containerd[1645]: time="2026-01-20T06:39:36.054728174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54fdff59b4-bvgmz,Uid:1fb741a2-9573-41fd-9b50-18c9b4a4a79a,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"3f9a3f40c526b4bc049476170bd696150b738b366de895543b0811a8b0664ffc\"" Jan 20 06:39:36.062928 systemd-networkd[1524]: calid6df2f5f03f: Gained IPv6LL Jan 20 06:39:36.069385 containerd[1645]: time="2026-01-20T06:39:36.069303888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:39:36.110795 kubelet[2865]: E0120 06:39:36.108013 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:36.128896 kubelet[2865]: E0120 06:39:36.127434 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:36.184911 containerd[1645]: time="2026-01-20T06:39:36.184736962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:36.193621 containerd[1645]: time="2026-01-20T06:39:36.192508533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:39:36.193621 containerd[1645]: time="2026-01-20T06:39:36.192655257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:36.196333 kubelet[2865]: E0120 06:39:36.194264 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:39:36.196333 kubelet[2865]: E0120 06:39:36.194631 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:39:36.196333 kubelet[2865]: E0120 06:39:36.194787 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:36.198700 kubelet[2865]: E0120 06:39:36.196423 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:39:36.228584 containerd[1645]: time="2026-01-20T06:39:36.228332340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bb7ff584c-brrnn,Uid:1b97c41d-4ead-4c93-97f0-70532331e2e7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0603e927bc4fb11e8505cef0170cae9bc8a6238969b58fe21c49d42fd7b6e503\"" Jan 20 06:39:36.243623 containerd[1645]: time="2026-01-20T06:39:36.243571285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:36.256000 audit[5576]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:36.256000 audit[5576]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd314b070 a2=0 a3=7ffcd314b05c items=0 ppid=2978 pid=5576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:36.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:36.268000 audit[5576]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:36.268000 audit[5576]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcd314b070 a2=0 a3=7ffcd314b05c items=0 ppid=2978 pid=5576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:36.268000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 
06:39:36.324333 containerd[1645]: time="2026-01-20T06:39:36.323989083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:36.330768 containerd[1645]: time="2026-01-20T06:39:36.330577537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:36.330768 containerd[1645]: time="2026-01-20T06:39:36.330683245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:36.332550 kubelet[2865]: E0120 06:39:36.332427 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:36.332550 kubelet[2865]: E0120 06:39:36.332488 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:36.333771 kubelet[2865]: E0120 06:39:36.333589 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:36.337293 kubelet[2865]: E0120 06:39:36.336481 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:36.380357 systemd-networkd[1524]: cali5c844ebcff4: Gained IPv6LL Jan 20 06:39:37.139699 kubelet[2865]: E0120 06:39:37.138717 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:37.139699 kubelet[2865]: E0120 06:39:37.139294 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:37.139699 kubelet[2865]: E0120 06:39:37.139407 2865 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:39:37.213274 systemd-networkd[1524]: cali05553b42cea: Gained IPv6LL Jan 20 06:39:37.279000 audit[5580]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:37.279000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff091e5890 a2=0 a3=7fff091e587c items=0 ppid=2978 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:37.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:37.288000 audit[5580]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:39:37.288000 audit[5580]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff091e5890 a2=0 a3=7fff091e587c items=0 ppid=2978 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:37.288000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:39:38.141296 kubelet[2865]: E0120 06:39:38.140687 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:39.078883 containerd[1645]: time="2026-01-20T06:39:39.078526296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:39:39.149500 containerd[1645]: time="2026-01-20T06:39:39.149016207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:39.153681 containerd[1645]: time="2026-01-20T06:39:39.153537384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:39:39.153681 containerd[1645]: time="2026-01-20T06:39:39.153639244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:39.154410 kubelet[2865]: E0120 06:39:39.154366 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:39:39.155012 kubelet[2865]: E0120 06:39:39.154774 2865 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:39:39.155432 kubelet[2865]: E0120 06:39:39.154989 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c510b35c9db4f5cba555b64598fab18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:39.164726 containerd[1645]: time="2026-01-20T06:39:39.164392155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:39:39.295805 containerd[1645]: time="2026-01-20T06:39:39.295628769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:39.300780 containerd[1645]: time="2026-01-20T06:39:39.300575566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:39:39.300780 containerd[1645]: time="2026-01-20T06:39:39.300754840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:39.304456 kubelet[2865]: E0120 06:39:39.301746 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:39:39.304456 kubelet[2865]: E0120 06:39:39.303957 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:39:39.306293 kubelet[2865]: E0120 06:39:39.305990 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:39.307740 kubelet[2865]: E0120 06:39:39.307693 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:39:40.076508 kubelet[2865]: E0120 06:39:40.075873 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:40.080994 containerd[1645]: time="2026-01-20T06:39:40.080944370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:39:40.174561 containerd[1645]: time="2026-01-20T06:39:40.173946027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:40.178357 containerd[1645]: time="2026-01-20T06:39:40.178168287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:39:40.178357 containerd[1645]: time="2026-01-20T06:39:40.178337362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 
06:39:40.178713 kubelet[2865]: E0120 06:39:40.178526 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:39:40.178713 kubelet[2865]: E0120 06:39:40.178590 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:39:40.179836 kubelet[2865]: E0120 06:39:40.178949 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPa
th:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g29tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:40.181447 kubelet[2865]: E0120 06:39:40.181317 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:40.444784 update_engine[1624]: I20260120 06:39:40.442813 1624 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 20 06:39:40.444784 update_engine[1624]: I20260120 06:39:40.442963 1624 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 06:39:40.446883 update_engine[1624]: I20260120 06:39:40.446645 1624 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 06:39:40.447913 update_engine[1624]: I20260120 06:39:40.447798 1624 omaha_request_params.cc:62] Current group set to developer Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448288 1624 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448307 1624 update_attempter.cc:643] Scheduling an action processor start. 
Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448334 1624 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448402 1624 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448482 1624 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448494 1624 omaha_request_action.cc:272] Request: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: Jan 20 06:39:40.448852 update_engine[1624]: I20260120 06:39:40.448503 1624 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:39:40.468421 update_engine[1624]: I20260120 06:39:40.468199 1624 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:39:40.470507 update_engine[1624]: I20260120 06:39:40.469782 1624 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:39:40.492928 update_engine[1624]: E20260120 06:39:40.492722 1624 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:39:40.492928 update_engine[1624]: I20260120 06:39:40.492874 1624 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 06:39:40.495527 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 06:39:42.074866 containerd[1645]: time="2026-01-20T06:39:42.074297837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:42.153466 containerd[1645]: time="2026-01-20T06:39:42.153400084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:42.157300 containerd[1645]: time="2026-01-20T06:39:42.156829329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:42.157300 containerd[1645]: time="2026-01-20T06:39:42.156929643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:42.158309 kubelet[2865]: E0120 06:39:42.158002 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:42.158309 kubelet[2865]: E0120 06:39:42.158199 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:42.158903 kubelet[2865]: E0120 06:39:42.158424 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:42.160183 kubelet[2865]: E0120 06:39:42.160124 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:39:44.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:55734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:44.299677 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:55734.service - OpenSSH per-connection server daemon (10.0.0.1:55734). Jan 20 06:39:44.317195 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 20 06:39:44.317425 kernel: audit: type=1130 audit(1768891184.299:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:55734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:44.516000 audit[5593]: USER_ACCT pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.519351 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 55734 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:39:44.521487 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:44.537707 systemd-logind[1623]: New session 9 of user core. 
Jan 20 06:39:44.518000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.565019 kernel: audit: type=1101 audit(1768891184.516:762): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.565388 kernel: audit: type=1103 audit(1768891184.518:763): pid=5593 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.565428 kernel: audit: type=1006 audit(1768891184.518:764): pid=5593 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 20 06:39:44.565925 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 20 06:39:44.581329 kernel: audit: type=1300 audit(1768891184.518:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde148bde0 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:44.518000 audit[5593]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde148bde0 a2=3 a3=0 items=0 ppid=1 pid=5593 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:44.622537 kernel: audit: type=1327 audit(1768891184.518:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:44.518000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:44.574000 audit[5593]: USER_START pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.574000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.674021 kernel: audit: type=1105 audit(1768891184.574:765): pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.675196 kernel: audit: type=1103 
audit(1768891184.574:766): pid=5597 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.882886 sshd[5597]: Connection closed by 10.0.0.1 port 55734 Jan 20 06:39:44.883370 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:44.884000 audit[5593]: USER_END pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.891533 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:55734.service: Deactivated successfully. Jan 20 06:39:44.895589 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 06:39:44.898663 systemd-logind[1623]: Session 9 logged out. Waiting for processes to exit. Jan 20 06:39:44.901890 systemd-logind[1623]: Removed session 9. 
Jan 20 06:39:44.884000 audit[5593]: CRED_DISP pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.950679 kernel: audit: type=1106 audit(1768891184.884:767): pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.951908 kernel: audit: type=1104 audit(1768891184.884:768): pid=5593 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:44.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:55734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:49.072397 containerd[1645]: time="2026-01-20T06:39:49.072337278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:39:49.145560 containerd[1645]: time="2026-01-20T06:39:49.144606336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:49.153688 containerd[1645]: time="2026-01-20T06:39:49.153624508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:39:49.153953 containerd[1645]: time="2026-01-20T06:39:49.153691448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:49.155402 kubelet[2865]: E0120 06:39:49.155195 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:39:49.155402 kubelet[2865]: E0120 06:39:49.155245 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:39:49.155917 kubelet[2865]: E0120 06:39:49.155533 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 06:39:49.162073 containerd[1645]: time="2026-01-20T06:39:49.161836670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:39:49.233659 containerd[1645]: time="2026-01-20T06:39:49.233341619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:49.240844 containerd[1645]: time="2026-01-20T06:39:49.240448001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:39:49.240844 containerd[1645]: time="2026-01-20T06:39:49.240567073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:49.241755 kubelet[2865]: E0120 06:39:49.241463 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:39:49.241755 kubelet[2865]: E0120 06:39:49.241591 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:39:49.241902 kubelet[2865]: E0120 06:39:49.241730 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:49.244339 kubelet[2865]: E0120 06:39:49.243994 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:39:49.902609 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976). Jan 20 06:39:49.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:49.907912 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:39:49.909714 kernel: audit: type=1130 audit(1768891189.901:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:50.015000 audit[5617]: USER_ACCT pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.016773 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:39:50.020813 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:50.031771 systemd-logind[1623]: New session 10 of user core. Jan 20 06:39:50.041522 kernel: audit: type=1101 audit(1768891190.015:771): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.041635 kernel: audit: type=1103 audit(1768891190.017:772): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.017000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.077947 containerd[1645]: time="2026-01-20T06:39:50.077567495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:39:50.080482 kernel: audit: type=1006 audit(1768891190.017:773): pid=5617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 06:39:50.080560 kernel: audit: type=1300 
audit(1768891190.017:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5a81c520 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:50.017000 audit[5617]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5a81c520 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:50.017000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:50.118711 kernel: audit: type=1327 audit(1768891190.017:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:50.125811 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 06:39:50.129000 audit[5617]: USER_START pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.149880 containerd[1645]: time="2026-01-20T06:39:50.149384768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:50.154604 containerd[1645]: time="2026-01-20T06:39:50.153573490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:39:50.154604 containerd[1645]: time="2026-01-20T06:39:50.153660182Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:50.156212 kubelet[2865]: E0120 06:39:50.155968 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:39:50.159980 kubelet[2865]: E0120 06:39:50.159452 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:39:50.161166 kubelet[2865]: E0120 06:39:50.160981 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.cr
t,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:50.133000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.163219 containerd[1645]: time="2026-01-20T06:39:50.161804179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:50.165570 kubelet[2865]: E0120 06:39:50.165145 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:39:50.185756 kernel: audit: type=1105 audit(1768891190.129:774): pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.185913 kernel: audit: type=1103 audit(1768891190.133:775): pid=5621 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.253601 containerd[1645]: time="2026-01-20T06:39:50.253482788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:50.256192 containerd[1645]: time="2026-01-20T06:39:50.256148340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:50.256808 containerd[1645]: 
time="2026-01-20T06:39:50.256220999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:50.257875 kubelet[2865]: E0120 06:39:50.257550 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:50.257875 kubelet[2865]: E0120 06:39:50.257621 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:50.257875 kubelet[2865]: E0120 06:39:50.257812 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:50.259531 kubelet[2865]: E0120 06:39:50.259482 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:39:50.307171 sshd[5621]: Connection closed by 10.0.0.1 port 53976 Jan 20 06:39:50.307683 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:50.309000 audit[5617]: USER_END pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.313518 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:53976.service: Deactivated successfully. Jan 20 06:39:50.317876 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 06:39:50.320266 systemd-logind[1623]: Session 10 logged out. Waiting for processes to exit. Jan 20 06:39:50.322950 systemd-logind[1623]: Removed session 10. Jan 20 06:39:50.331949 update_engine[1624]: I20260120 06:39:50.331225 1624 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:39:50.331949 update_engine[1624]: I20260120 06:39:50.331380 1624 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:39:50.331949 update_engine[1624]: I20260120 06:39:50.331889 1624 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:39:50.309000 audit[5617]: CRED_DISP pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.348134 update_engine[1624]: E20260120 06:39:50.347924 1624 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:39:50.348210 update_engine[1624]: I20260120 06:39:50.348191 1624 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 06:39:50.357431 kernel: audit: type=1106 audit(1768891190.309:776): pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.357546 kernel: audit: type=1104 audit(1768891190.309:777): pid=5617 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:50.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:51.165446 kubelet[2865]: E0120 06:39:51.165357 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:39:52.076212 kubelet[2865]: E0120 06:39:52.075457 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:39:52.078243 kubelet[2865]: E0120 06:39:52.077363 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:39:53.073830 containerd[1645]: time="2026-01-20T06:39:53.073232526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:39:53.143988 containerd[1645]: time="2026-01-20T06:39:53.143546903Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:39:53.150694 containerd[1645]: time="2026-01-20T06:39:53.150570803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:39:53.151660 containerd[1645]: time="2026-01-20T06:39:53.150924963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:39:53.151996 kubelet[2865]: E0120 06:39:53.151887 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:53.152662 kubelet[2865]: E0120 06:39:53.152404 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:39:53.152844 kubelet[2865]: E0120 06:39:53.152747 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:39:53.154195 kubelet[2865]: E0120 06:39:53.153963 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:39:55.325507 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:38686.service - OpenSSH per-connection server daemon (10.0.0.1:38686). Jan 20 06:39:55.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:38686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:39:55.332372 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:39:55.332543 kernel: audit: type=1130 audit(1768891195.325:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:38686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:55.457000 audit[5672]: USER_ACCT pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.459247 sshd[5672]: Accepted publickey for core from 10.0.0.1 port 38686 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:39:55.462736 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:39:55.473951 systemd-logind[1623]: New session 11 of user core. Jan 20 06:39:55.459000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.511476 kernel: audit: type=1101 audit(1768891195.457:780): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.511537 kernel: audit: type=1103 audit(1768891195.459:781): pid=5672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.511645 kernel: audit: type=1006 audit(1768891195.459:782): pid=5672 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 06:39:55.524380 kernel: audit: type=1300 audit(1768891195.459:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c5cfb40 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:55.459000 audit[5672]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4c5cfb40 a2=3 a3=0 items=0 ppid=1 pid=5672 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:39:55.546281 kernel: audit: type=1327 audit(1768891195.459:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:55.459000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:39:55.556471 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 06:39:55.559000 audit[5672]: USER_START pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.591016 kernel: audit: type=1105 audit(1768891195.559:783): pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.592945 kernel: audit: type=1103 audit(1768891195.560:784): pid=5677 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.560000 audit[5677]: CRED_ACQ pid=5677 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.756011 sshd[5677]: Connection closed by 10.0.0.1 port 38686 Jan 20 06:39:55.756740 sshd-session[5672]: pam_unix(sshd:session): session closed for user core Jan 20 06:39:55.758000 audit[5672]: USER_END pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.763935 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:38686.service: Deactivated successfully. Jan 20 06:39:55.767653 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 06:39:55.770159 systemd-logind[1623]: Session 11 logged out. Waiting for processes to exit. Jan 20 06:39:55.772989 systemd-logind[1623]: Removed session 11. 
Jan 20 06:39:55.758000 audit[5672]: CRED_DISP pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.808967 kernel: audit: type=1106 audit(1768891195.758:785): pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.809218 kernel: audit: type=1104 audit(1768891195.758:786): pid=5672 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:39:55.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:38686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:39:56.073982 kubelet[2865]: E0120 06:39:56.073840 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:00.331850 update_engine[1624]: I20260120 06:40:00.331671 1624 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:40:00.331850 update_engine[1624]: I20260120 06:40:00.331841 1624 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:40:00.332775 update_engine[1624]: I20260120 06:40:00.332657 1624 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 06:40:00.348142 update_engine[1624]: E20260120 06:40:00.347954 1624 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:40:00.348680 update_engine[1624]: I20260120 06:40:00.348496 1624 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 06:40:00.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:00.797758 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:38702.service - OpenSSH per-connection server daemon (10.0.0.1:38702). 
Jan 20 06:40:00.804656 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:00.804803 kernel: audit: type=1130 audit(1768891200.797:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:00.920000 audit[5693]: USER_ACCT pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:00.921749 sshd[5693]: Accepted publickey for core from 10.0.0.1 port 38702 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:00.926434 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:00.937716 systemd-logind[1623]: New session 12 of user core. 
Jan 20 06:40:00.950443 kernel: audit: type=1101 audit(1768891200.920:789): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:00.950547 kernel: audit: type=1103 audit(1768891200.923:790): pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:00.923000 audit[5693]: CRED_ACQ pid=5693 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:00.989811 kernel: audit: type=1006 audit(1768891200.923:791): pid=5693 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 06:40:00.989948 kernel: audit: type=1300 audit(1768891200.923:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3015ee80 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:00.923000 audit[5693]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3015ee80 a2=3 a3=0 items=0 ppid=1 pid=5693 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:00.923000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:01.022494 kernel: audit: type=1327 audit(1768891200.923:791): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:01.033759 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 06:40:01.038000 audit[5693]: USER_START pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.066273 kernel: audit: type=1105 audit(1768891201.038:792): pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.066483 kernel: audit: type=1103 audit(1768891201.041:793): pid=5699 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.041000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.189838 sshd[5699]: Connection closed by 10.0.0.1 port 38702 Jan 20 06:40:01.190967 sshd-session[5693]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:01.192000 audit[5693]: USER_END pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 06:40:01.193000 audit[5693]: CRED_DISP pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.243612 kernel: audit: type=1106 audit(1768891201.192:794): pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.243678 kernel: audit: type=1104 audit(1768891201.193:795): pid=5693 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.258240 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:38702.service: Deactivated successfully. Jan 20 06:40:01.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:38702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:01.261449 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 06:40:01.263517 systemd-logind[1623]: Session 12 logged out. Waiting for processes to exit. Jan 20 06:40:01.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.35:22-10.0.0.1:38712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:01.269580 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:38712.service - OpenSSH per-connection server daemon (10.0.0.1:38712). Jan 20 06:40:01.271624 systemd-logind[1623]: Removed session 12. 
Jan 20 06:40:01.352000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.354740 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 38712 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:01.354000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.355000 audit[5713]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb8565200 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:01.355000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:01.357906 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:01.368667 systemd-logind[1623]: New session 13 of user core. Jan 20 06:40:01.381653 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 06:40:01.387000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.392000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.651301 sshd[5717]: Connection closed by 10.0.0.1 port 38712 Jan 20 06:40:01.652912 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:01.657000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.658000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.667662 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:38712.service: Deactivated successfully. Jan 20 06:40:01.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.35:22-10.0.0.1:38712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:01.672722 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 06:40:01.680567 systemd-logind[1623]: Session 13 logged out. Waiting for processes to exit. 
Jan 20 06:40:01.683516 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:38726.service - OpenSSH per-connection server daemon (10.0.0.1:38726). Jan 20 06:40:01.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.35:22-10.0.0.1:38726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:01.689501 systemd-logind[1623]: Removed session 13. Jan 20 06:40:01.767000 audit[5728]: USER_ACCT pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.770018 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 38726 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:01.770000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.770000 audit[5728]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec8172610 a2=3 a3=0 items=0 ppid=1 pid=5728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:01.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:01.773266 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:01.784963 systemd-logind[1623]: New session 14 of user core. Jan 20 06:40:01.800487 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 06:40:01.806000 audit[5728]: USER_START pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.810000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.971904 sshd[5732]: Connection closed by 10.0.0.1 port 38726 Jan 20 06:40:01.972432 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:01.974000 audit[5728]: USER_END pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.974000 audit[5728]: CRED_DISP pid=5728 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:01.979191 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:38726.service: Deactivated successfully. Jan 20 06:40:01.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.35:22-10.0.0.1:38726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:01.982471 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 06:40:01.985478 systemd-logind[1623]: Session 14 logged out. Waiting for processes to exit. 
Jan 20 06:40:01.987926 systemd-logind[1623]: Removed session 14. Jan 20 06:40:02.075325 kubelet[2865]: E0120 06:40:02.074564 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:40:04.076987 kubelet[2865]: E0120 06:40:04.076607 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:40:04.081601 containerd[1645]: time="2026-01-20T06:40:04.080778238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:40:04.085330 kubelet[2865]: E0120 06:40:04.084783 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:40:04.198456 containerd[1645]: time="2026-01-20T06:40:04.194546262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:04.222875 containerd[1645]: time="2026-01-20T06:40:04.222735989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:40:04.223294 containerd[1645]: time="2026-01-20T06:40:04.222899324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:04.229229 kubelet[2865]: E0120 06:40:04.224700 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:40:04.230812 kubelet[2865]: E0120 06:40:04.229338 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:40:04.234202 kubelet[2865]: E0120 06:40:04.233973 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g29tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:04.236185 kubelet[2865]: E0120 06:40:04.235853 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:40:04.240020 containerd[1645]: time="2026-01-20T06:40:04.239494410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:40:04.314830 containerd[1645]: time="2026-01-20T06:40:04.314767915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:04.327534 containerd[1645]: 
time="2026-01-20T06:40:04.327332400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:40:04.328818 containerd[1645]: time="2026-01-20T06:40:04.328579166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:04.338946 kubelet[2865]: E0120 06:40:04.329528 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:40:04.338946 kubelet[2865]: E0120 06:40:04.335591 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:40:04.338946 kubelet[2865]: E0120 06:40:04.337005 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c510b35c9db4f5cba555b64598fab18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:04.349322 containerd[1645]: time="2026-01-20T06:40:04.349243102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:40:04.429819 containerd[1645]: 
time="2026-01-20T06:40:04.429317123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:04.438834 containerd[1645]: time="2026-01-20T06:40:04.438691447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:40:04.438834 containerd[1645]: time="2026-01-20T06:40:04.438799950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:04.441445 kubelet[2865]: E0120 06:40:04.439646 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:40:04.441445 kubelet[2865]: E0120 06:40:04.439703 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:40:04.441445 kubelet[2865]: E0120 06:40:04.439828 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:04.441883 kubelet[2865]: E0120 06:40:04.441487 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:40:05.107731 kubelet[2865]: E0120 06:40:05.105898 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:40:07.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.35:22-10.0.0.1:37880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:07.017612 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:37880.service - OpenSSH per-connection server daemon (10.0.0.1:37880). 
Jan 20 06:40:07.039876 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 06:40:07.040307 kernel: audit: type=1130 audit(1768891207.016:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.35:22-10.0.0.1:37880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:07.133581 containerd[1645]: time="2026-01-20T06:40:07.128791221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:40:07.348694 containerd[1645]: time="2026-01-20T06:40:07.343689819Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:07.359837 containerd[1645]: time="2026-01-20T06:40:07.359739724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:40:07.359837 containerd[1645]: time="2026-01-20T06:40:07.359823781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:07.372265 kubelet[2865]: E0120 06:40:07.371719 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:07.372265 kubelet[2865]: E0120 06:40:07.371959 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:07.386883 kubelet[2865]: E0120 06:40:07.372347 2865 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:07.386883 kubelet[2865]: E0120 06:40:07.376981 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:07.624000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
06:40:07.637014 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 37880 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:07.651301 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:07.715304 systemd-logind[1623]: New session 15 of user core. Jan 20 06:40:07.732352 kernel: audit: type=1101 audit(1768891207.624:816): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:07.646000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:07.760682 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 06:40:07.846309 kernel: audit: type=1103 audit(1768891207.646:817): pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:07.955312 kernel: audit: type=1006 audit(1768891207.647:818): pid=5747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 06:40:07.955547 kernel: audit: type=1300 audit(1768891207.647:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7cab3640 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:07.647000 audit[5747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7cab3640 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:07.647000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:07.984707 kernel: audit: type=1327 audit(1768891207.647:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:08.060210 kernel: audit: type=1105 audit(1768891207.803:819): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:07.803000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:07.812000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.120903 kernel: audit: type=1103 audit(1768891207.812:820): pid=5751 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.598718 sshd[5751]: Connection closed by 10.0.0.1 port 37880 Jan 20 06:40:08.617781 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:08.628000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.688553 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:37880.service: Deactivated successfully. 
Jan 20 06:40:08.696761 kernel: audit: type=1106 audit(1768891208.628:821): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.737753 kernel: audit: type=1104 audit(1768891208.628:822): pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.628000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:08.743007 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 06:40:08.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.35:22-10.0.0.1:37880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:08.786949 systemd-logind[1623]: Session 15 logged out. Waiting for processes to exit. Jan 20 06:40:08.806781 systemd-logind[1623]: Removed session 15. Jan 20 06:40:10.335786 update_engine[1624]: I20260120 06:40:10.335716 1624 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:40:10.357611 update_engine[1624]: I20260120 06:40:10.342355 1624 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:40:10.357611 update_engine[1624]: I20260120 06:40:10.345889 1624 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:40:10.371623 update_engine[1624]: E20260120 06:40:10.371591 1624 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:40:10.371794 update_engine[1624]: I20260120 06:40:10.371773 1624 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 06:40:10.371854 update_engine[1624]: I20260120 06:40:10.371838 1624 omaha_request_action.cc:617] Omaha request response: Jan 20 06:40:10.371991 update_engine[1624]: E20260120 06:40:10.371973 1624 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390358 1624 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390380 1624 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390517 1624 update_attempter.cc:306] Processing Done. Jan 20 06:40:10.395009 update_engine[1624]: E20260120 06:40:10.390537 1624 update_attempter.cc:619] Update failed. Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390549 1624 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390558 1624 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390570 1624 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390650 1624 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390676 1624 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390687 1624 omaha_request_action.cc:272] Request: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390699 1624 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 06:40:10.395009 update_engine[1624]: I20260120 06:40:10.390733 1624 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 06:40:10.441780 update_engine[1624]: I20260120 06:40:10.409550 1624 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 06:40:10.441815 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 06:40:10.453736 update_engine[1624]: E20260120 06:40:10.452908 1624 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453001 1624 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453013 1624 omaha_request_action.cc:617] Omaha request response: Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453256 1624 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453271 1624 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453279 1624 update_attempter.cc:306] Processing Done. Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453289 1624 update_attempter.cc:310] Error event sent. Jan 20 06:40:10.453736 update_engine[1624]: I20260120 06:40:10.453303 1624 update_check_scheduler.cc:74] Next update check in 41m31s Jan 20 06:40:10.471907 locksmithd[1694]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 06:40:13.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.35:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:13.633011 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:37888.service - OpenSSH per-connection server daemon (10.0.0.1:37888). 
Jan 20 06:40:13.647761 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:13.647814 kernel: audit: type=1130 audit(1768891213.633:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.35:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:14.082000 audit[5774]: USER_ACCT pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.102005 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:14.146014 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 37888 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:14.146772 kernel: audit: type=1101 audit(1768891214.082:825): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.094000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.222559 systemd-logind[1623]: New session 16 of user core. 
Jan 20 06:40:14.353775 kernel: audit: type=1103 audit(1768891214.094:826): pid=5774 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.353907 kernel: audit: type=1006 audit(1768891214.094:827): pid=5774 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 06:40:14.353946 kernel: audit: type=1300 audit(1768891214.094:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b9cf360 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:14.094000 audit[5774]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b9cf360 a2=3 a3=0 items=0 ppid=1 pid=5774 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:14.422787 kernel: audit: type=1327 audit(1768891214.094:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:14.094000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:14.419346 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 06:40:14.490000 audit[5774]: USER_START pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.595340 kernel: audit: type=1105 audit(1768891214.490:828): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.595595 kernel: audit: type=1103 audit(1768891214.498:829): pid=5779 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:14.498000 audit[5779]: CRED_ACQ pid=5779 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:15.083976 kubelet[2865]: E0120 06:40:15.082388 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:15.123825 kubelet[2865]: E0120 06:40:15.120848 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:40:15.159784 containerd[1645]: time="2026-01-20T06:40:15.158958474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:40:15.182307 kubelet[2865]: E0120 06:40:15.167577 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:40:15.527828 containerd[1645]: time="2026-01-20T06:40:15.522397124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:15.603806 containerd[1645]: time="2026-01-20T06:40:15.589311233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:40:15.603806 containerd[1645]: time="2026-01-20T06:40:15.601579337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:15.604605 kubelet[2865]: E0120 06:40:15.604563 2865 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:40:15.604730 kubelet[2865]: E0120 06:40:15.604710 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:40:15.605408 kubelet[2865]: E0120 06:40:15.604974 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{
Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:15.636589 containerd[1645]: time="2026-01-20T06:40:15.636401297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:40:15.827667 containerd[1645]: time="2026-01-20T06:40:15.819811518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:15.876008 containerd[1645]: time="2026-01-20T06:40:15.852905665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:40:15.876008 containerd[1645]: time="2026-01-20T06:40:15.853312595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:15.876703 kubelet[2865]: E0120 06:40:15.856976 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:40:15.876703 kubelet[2865]: E0120 06:40:15.857307 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:40:15.876703 kubelet[2865]: E0120 06:40:15.857910 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&
Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:15.876703 kubelet[2865]: E0120 06:40:15.859839 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:40:16.074917 kubelet[2865]: E0120 06:40:16.073972 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:16.231947 sshd[5779]: Connection closed by 10.0.0.1 port 37888 Jan 20 06:40:16.234645 sshd-session[5774]: pam_unix(sshd:session): 
session closed for user core Jan 20 06:40:16.269000 audit[5774]: USER_END pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:16.298587 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:37888.service: Deactivated successfully. Jan 20 06:40:16.299890 systemd-logind[1623]: Session 16 logged out. Waiting for processes to exit. Jan 20 06:40:16.324837 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 06:40:16.385819 kernel: audit: type=1106 audit(1768891216.269:830): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:16.269000 audit[5774]: CRED_DISP pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:16.391990 systemd-logind[1623]: Removed session 16. Jan 20 06:40:16.528967 kernel: audit: type=1104 audit(1768891216.269:831): pid=5774 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:16.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.35:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:40:17.168356 containerd[1645]: time="2026-01-20T06:40:17.167771236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:40:17.537669 containerd[1645]: time="2026-01-20T06:40:17.532905370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:17.548950 containerd[1645]: time="2026-01-20T06:40:17.547637679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:40:17.548950 containerd[1645]: time="2026-01-20T06:40:17.547849485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:17.549752 kubelet[2865]: E0120 06:40:17.548838 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:17.549752 kubelet[2865]: E0120 06:40:17.548891 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:17.553590 kubelet[2865]: E0120 06:40:17.552643 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:17.554917 kubelet[2865]: E0120 06:40:17.554401 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:40:17.554973 containerd[1645]: time="2026-01-20T06:40:17.553704149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:40:17.889679 containerd[1645]: time="2026-01-20T06:40:17.885819628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:17.949366 containerd[1645]: time="2026-01-20T06:40:17.944919024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:40:17.949366 containerd[1645]: time="2026-01-20T06:40:17.945669784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:17.949764 kubelet[2865]: E0120 06:40:17.947710 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:17.949764 kubelet[2865]: E0120 06:40:17.947767 2865 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:17.949764 kubelet[2865]: E0120 06:40:17.947909 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:17.949764 kubelet[2865]: E0120 06:40:17.949340 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:40:18.108729 containerd[1645]: time="2026-01-20T06:40:18.107617636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:40:18.315959 containerd[1645]: time="2026-01-20T06:40:18.312617855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
06:40:18.326610 containerd[1645]: time="2026-01-20T06:40:18.324636191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:40:18.326610 containerd[1645]: time="2026-01-20T06:40:18.324870178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:18.329711 kubelet[2865]: E0120 06:40:18.329668 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:40:18.334638 kubelet[2865]: E0120 06:40:18.331601 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:40:18.334638 kubelet[2865]: E0120 06:40:18.331779 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:18.337997 kubelet[2865]: E0120 06:40:18.337920 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:40:19.094636 kubelet[2865]: E0120 06:40:19.090984 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:21.394646 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:21.394787 kernel: audit: type=1130 audit(1768891221.352:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:21.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:21.353810 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:33642.service - OpenSSH per-connection server daemon (10.0.0.1:33642). 
Jan 20 06:40:22.101000 audit[5805]: USER_ACCT pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:22.217825 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 33642 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:22.255445 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:22.291394 kernel: audit: type=1101 audit(1768891222.101:834): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:22.238000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:22.324414 systemd-logind[1623]: New session 17 of user core. 
Jan 20 06:40:22.464817 kernel: audit: type=1103 audit(1768891222.238:835): pid=5805 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:22.465231 kernel: audit: type=1006 audit(1768891222.238:836): pid=5805 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 06:40:22.238000 audit[5805]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe09f9e090 a2=3 a3=0 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:22.815295 kernel: audit: type=1300 audit(1768891222.238:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe09f9e090 a2=3 a3=0 items=0 ppid=1 pid=5805 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:22.815438 kernel: audit: type=1327 audit(1768891222.238:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:22.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:22.852679 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 06:40:22.975000 audit[5805]: USER_START pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:23.111874 kernel: audit: type=1105 audit(1768891222.975:837): pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:23.010000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:23.283875 kernel: audit: type=1103 audit(1768891223.010:838): pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:24.348743 sshd[5823]: Connection closed by 10.0.0.1 port 33642 Jan 20 06:40:24.351835 sshd-session[5805]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:24.362000 audit[5805]: USER_END pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:24.452428 kernel: audit: type=1106 audit(1768891224.362:839): pid=5805 uid=0 auid=500 ses=17 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:24.465397 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:33642.service: Deactivated successfully. Jan 20 06:40:24.566900 kernel: audit: type=1104 audit(1768891224.362:840): pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:24.362000 audit[5805]: CRED_DISP pid=5805 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:24.479644 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 06:40:24.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:24.642002 systemd-logind[1623]: Session 17 logged out. Waiting for processes to exit. Jan 20 06:40:24.654763 systemd-logind[1623]: Removed session 17. 
Jan 20 06:40:26.208373 kubelet[2865]: E0120 06:40:26.143341 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:40:27.146949 kubelet[2865]: E0120 06:40:27.144953 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:40:29.512457 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:29.512785 kernel: audit: type=1130 audit(1768891229.443:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:52048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:40:29.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:52048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:29.443896 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:52048.service - OpenSSH per-connection server daemon (10.0.0.1:52048). Jan 20 06:40:30.113669 kubelet[2865]: E0120 06:40:30.111784 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:40:30.481000 audit[5839]: USER_ACCT pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:30.562996 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:30.595680 kernel: audit: type=1101 audit(1768891230.481:843): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:30.595786 kernel: audit: type=1103 audit(1768891230.578:844): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:30.578000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:30.595941 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:30.739861 systemd-logind[1623]: New session 18 of user core. Jan 20 06:40:30.769407 kernel: audit: type=1006 audit(1768891230.578:845): pid=5839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 06:40:30.578000 audit[5839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe776e20e0 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:30.774765 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 06:40:30.913438 kernel: audit: type=1300 audit(1768891230.578:845): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe776e20e0 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:30.578000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:31.002421 kernel: audit: type=1327 audit(1768891230.578:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:30.836000 audit[5839]: USER_START pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:31.146880 kubelet[2865]: E0120 06:40:31.100936 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:40:31.146880 kubelet[2865]: E0120 06:40:31.103875 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:40:31.146880 kubelet[2865]: E0120 06:40:31.104413 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:31.151483 kernel: audit: type=1105 audit(1768891230.836:846): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:31.239418 kernel: audit: type=1103 audit(1768891230.862:847): pid=5843 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:30.862000 audit[5843]: CRED_ACQ pid=5843 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:32.077821 sshd[5843]: Connection closed by 10.0.0.1 port 52048 Jan 20 06:40:32.044982 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:32.092000 audit[5839]: USER_END pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:32.118017 systemd-logind[1623]: Session 18 logged out. Waiting for processes to exit. Jan 20 06:40:32.121437 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:52048.service: Deactivated successfully. Jan 20 06:40:32.155854 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 06:40:32.162418 systemd-logind[1623]: Removed session 18. Jan 20 06:40:32.228360 kernel: audit: type=1106 audit(1768891232.092:848): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:32.231837 kernel: audit: type=1104 audit(1768891232.094:849): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:32.094000 audit[5839]: CRED_DISP pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:32.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:52048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:33.119440 kubelet[2865]: E0120 06:40:33.107447 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:40:33.119440 kubelet[2865]: E0120 06:40:33.110410 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:37.073381 kubelet[2865]: E0120 06:40:37.072614 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:37.086997 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200). Jan 20 06:40:37.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:56200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:37.107991 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:37.108587 kernel: audit: type=1130 audit(1768891237.084:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:56200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:40:37.658000 audit[5858]: USER_ACCT pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:37.669374 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:37.678624 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:37.741877 systemd-logind[1623]: New session 19 of user core. Jan 20 06:40:37.851981 kernel: audit: type=1101 audit(1768891237.658:852): pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:37.669000 audit[5858]: CRED_ACQ pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:37.859644 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 06:40:37.959347 kernel: audit: type=1103 audit(1768891237.669:853): pid=5858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:37.970347 kernel: audit: type=1006 audit(1768891237.669:854): pid=5858 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 06:40:37.669000 audit[5858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60eded10 a2=3 a3=0 items=0 ppid=1 pid=5858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:38.155287 kernel: audit: type=1300 audit(1768891237.669:854): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60eded10 a2=3 a3=0 items=0 ppid=1 pid=5858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:38.175977 kubelet[2865]: E0120 06:40:38.175763 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:40:37.669000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:38.239912 kernel: audit: type=1327 audit(1768891237.669:854): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 
06:40:37.876000 audit[5858]: USER_START pid=5858 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:38.375908 kernel: audit: type=1105 audit(1768891237.876:855): pid=5858 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:38.376911 kernel: audit: type=1103 audit(1768891237.891:856): pid=5862 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:37.891000 audit[5862]: CRED_ACQ pid=5862 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:38.763628 sshd[5862]: Connection closed by 10.0.0.1 port 56200 Jan 20 06:40:38.767624 sshd-session[5858]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:38.780000 audit[5858]: USER_END pid=5858 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:38.893003 kernel: audit: type=1106 audit(1768891238.780:857): pid=5858 uid=0 auid=500 ses=19 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:38.786000 audit[5858]: CRED_DISP pid=5858 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:39.007537 kernel: audit: type=1104 audit(1768891238.786:858): pid=5858 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:39.093727 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:56200.service: Deactivated successfully. Jan 20 06:40:39.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:56200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:39.098604 systemd-logind[1623]: Session 19 logged out. Waiting for processes to exit. Jan 20 06:40:39.114818 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 06:40:39.137696 systemd-logind[1623]: Removed session 19. 
Jan 20 06:40:42.115996 kubelet[2865]: E0120 06:40:42.104882 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:42.115996 kubelet[2865]: E0120 06:40:42.105587 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:40:42.115996 kubelet[2865]: E0120 06:40:42.107754 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:40:43.121526 kubelet[2865]: E0120 06:40:43.120971 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:40:43.981931 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:43.982619 kernel: audit: type=1130 audit(1768891243.866:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:56206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:43.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:56206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:43.861933 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:56206.service - OpenSSH per-connection server daemon (10.0.0.1:56206). 
Jan 20 06:40:44.174550 kubelet[2865]: E0120 06:40:44.173916 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:40:45.077013 kubelet[2865]: E0120 06:40:45.075708 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:45.223000 audit[5875]: USER_ACCT pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.297444 systemd-logind[1623]: New session 20 of user core. 
Jan 20 06:40:45.232946 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:45.304965 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 56206 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:45.314468 kernel: audit: type=1101 audit(1768891245.223:861): pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.228000 audit[5875]: CRED_ACQ pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.393579 kernel: audit: type=1103 audit(1768891245.228:862): pid=5875 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.395787 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 06:40:45.439866 kernel: audit: type=1006 audit(1768891245.228:863): pid=5875 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 06:40:45.228000 audit[5875]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7e15a90 a2=3 a3=0 items=0 ppid=1 pid=5875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:45.540890 kernel: audit: type=1300 audit(1768891245.228:863): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff7e15a90 a2=3 a3=0 items=0 ppid=1 pid=5875 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:45.557896 kernel: audit: type=1327 audit(1768891245.228:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:45.228000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:45.424000 audit[5875]: USER_START pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.710853 kernel: audit: type=1105 audit(1768891245.424:864): pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.434000 audit[5879]: CRED_ACQ pid=5879 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:45.801751 kernel: audit: type=1103 audit(1768891245.434:865): pid=5879 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:46.778425 sshd[5879]: Connection closed by 10.0.0.1 port 56206 Jan 20 06:40:46.774990 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:46.784000 audit[5875]: USER_END pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:46.893670 kernel: audit: type=1106 audit(1768891246.784:866): pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:46.926920 kernel: audit: type=1104 audit(1768891246.885:867): pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:46.885000 audit[5875]: CRED_DISP pid=5875 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:46.915698 systemd[1]: 
sshd@18-10.0.0.35:22-10.0.0.1:56206.service: Deactivated successfully. Jan 20 06:40:46.944884 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 06:40:46.963985 systemd-logind[1623]: Session 20 logged out. Waiting for processes to exit. Jan 20 06:40:46.969999 systemd-logind[1623]: Removed session 20. Jan 20 06:40:46.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:56206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:47.115492 kubelet[2865]: E0120 06:40:47.114443 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:40:47.120012 kubelet[2865]: E0120 06:40:47.117660 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:40:50.079804 containerd[1645]: time="2026-01-20T06:40:50.078998147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:40:50.222522 containerd[1645]: time="2026-01-20T06:40:50.214800407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:50.227997 containerd[1645]: time="2026-01-20T06:40:50.227789568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:50.232792 containerd[1645]: time="2026-01-20T06:40:50.227835239Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:40:50.233684 kubelet[2865]: E0120 06:40:50.229817 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:40:50.233684 kubelet[2865]: E0120 06:40:50.230773 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:40:50.233684 kubelet[2865]: E0120 06:40:50.230937 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g29tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-grqpc_calico-system(1d1bd19b-efe8-47e1-8a7a-7256f246c0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:50.235552 kubelet[2865]: E0120 06:40:50.233935 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:40:51.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.35:22-10.0.0.1:43604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:40:51.800749 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:43604.service - OpenSSH per-connection server daemon (10.0.0.1:43604). Jan 20 06:40:51.881799 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:51.881961 kernel: audit: type=1130 audit(1768891251.799:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.35:22-10.0.0.1:43604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:52.152000 audit[5926]: USER_ACCT pid=5926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.161558 sshd[5926]: Accepted publickey for core from 10.0.0.1 port 43604 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:52.181645 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:52.210494 systemd-logind[1623]: New session 21 of user core. 
Jan 20 06:40:52.232900 kernel: audit: type=1101 audit(1768891252.152:870): pid=5926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.177000 audit[5926]: CRED_ACQ pid=5926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.302753 kernel: audit: type=1103 audit(1768891252.177:871): pid=5926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.302908 kernel: audit: type=1006 audit(1768891252.177:872): pid=5926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 06:40:52.177000 audit[5926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf3c6a60 a2=3 a3=0 items=0 ppid=1 pid=5926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:52.352517 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 06:40:52.429751 kernel: audit: type=1300 audit(1768891252.177:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaf3c6a60 a2=3 a3=0 items=0 ppid=1 pid=5926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:52.177000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:52.481783 kernel: audit: type=1327 audit(1768891252.177:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:52.364000 audit[5926]: USER_START pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.583516 kernel: audit: type=1105 audit(1768891252.364:873): pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.373000 audit[5933]: CRED_ACQ pid=5933 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:52.673777 kernel: audit: type=1103 audit(1768891252.373:874): pid=5933 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:53.173369 sshd[5933]: Connection closed by 10.0.0.1 port 43604 
Jan 20 06:40:53.174831 sshd-session[5926]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:53.179000 audit[5926]: USER_END pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:53.194010 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:43604.service: Deactivated successfully. Jan 20 06:40:53.196372 systemd-logind[1623]: Session 21 logged out. Waiting for processes to exit. Jan 20 06:40:53.206848 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 06:40:53.215805 systemd-logind[1623]: Removed session 21. Jan 20 06:40:53.266755 kernel: audit: type=1106 audit(1768891253.179:875): pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:53.183000 audit[5926]: CRED_DISP pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:53.336485 kernel: audit: type=1104 audit(1768891253.183:876): pid=5926 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:53.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.35:22-10.0.0.1:43604 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 20 06:40:54.124795 containerd[1645]: time="2026-01-20T06:40:54.122420910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:40:54.236388 containerd[1645]: time="2026-01-20T06:40:54.231718110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:54.258622 containerd[1645]: time="2026-01-20T06:40:54.257485287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:54.258622 containerd[1645]: time="2026-01-20T06:40:54.257677227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:40:54.263565 kubelet[2865]: E0120 06:40:54.262703 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:40:54.263565 kubelet[2865]: E0120 06:40:54.262944 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:40:54.263565 kubelet[2865]: E0120 06:40:54.263511 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c510b35c9db4f5cba555b64598fab18,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:54.271353 containerd[1645]: time="2026-01-20T06:40:54.270716155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:40:54.385959 containerd[1645]: 
time="2026-01-20T06:40:54.378763531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:54.389562 containerd[1645]: time="2026-01-20T06:40:54.388660972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:40:54.389562 containerd[1645]: time="2026-01-20T06:40:54.388776606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:54.390886 kubelet[2865]: E0120 06:40:54.389933 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:40:54.393824 kubelet[2865]: E0120 06:40:54.391785 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:40:54.394780 kubelet[2865]: E0120 06:40:54.394721 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7688649cc6-vz554_calico-system(85a3d7fc-92d2-477e-a3c6-cf998fc60fae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:54.399875 kubelet[2865]: E0120 06:40:54.399822 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:40:55.089833 containerd[1645]: time="2026-01-20T06:40:55.088698183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:40:55.228798 containerd[1645]: time="2026-01-20T06:40:55.227977391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:55.257869 containerd[1645]: time="2026-01-20T06:40:55.256681163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:40:55.257869 containerd[1645]: time="2026-01-20T06:40:55.256929022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:55.260998 kubelet[2865]: E0120 06:40:55.259807 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:55.260998 kubelet[2865]: E0120 06:40:55.259888 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:40:55.260998 kubelet[2865]: E0120 06:40:55.260537 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdvt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-nqfrx_calico-apiserver(fdd5baaa-865a-43eb-a3a6-626c707ee467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:55.263760 kubelet[2865]: E0120 06:40:55.263662 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:40:56.088698 containerd[1645]: time="2026-01-20T06:40:56.087994248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:40:56.102744 kubelet[2865]: E0120 06:40:56.102700 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:40:56.218816 containerd[1645]: time="2026-01-20T06:40:56.214750941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:56.241810 containerd[1645]: time="2026-01-20T06:40:56.241748294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:40:56.246613 containerd[1645]: time="2026-01-20T06:40:56.244001960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:56.251722 kubelet[2865]: E0120 06:40:56.251675 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:40:56.251867 kubelet[2865]: E0120 06:40:56.251845 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:40:56.252722 kubelet[2865]: E0120 06:40:56.252668 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:56.278842 containerd[1645]: time="2026-01-20T06:40:56.277923284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:40:56.392801 containerd[1645]: time="2026-01-20T06:40:56.389984394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:56.398921 containerd[1645]: time="2026-01-20T06:40:56.396765855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:40:56.398921 containerd[1645]: time="2026-01-20T06:40:56.397002804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:56.405672 kubelet[2865]: E0120 06:40:56.399817 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:40:56.405672 kubelet[2865]: E0120 06:40:56.399874 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:40:56.405672 kubelet[2865]: E0120 06:40:56.399999 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk6rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kp869_calico-system(67f738e9-ce9e-42e1-a454-66084ff2d3ad): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:40:56.405672 kubelet[2865]: E0120 06:40:56.404959 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:40:57.078324 kubelet[2865]: E0120 06:40:57.077923 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:40:58.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.35:22-10.0.0.1:47186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:58.220778 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:47186.service - OpenSSH per-connection server daemon (10.0.0.1:47186). 
Jan 20 06:40:58.299672 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:40:58.299773 kernel: audit: type=1130 audit(1768891258.219:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.35:22-10.0.0.1:47186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:40:58.542000 audit[5951]: USER_ACCT pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:58.592013 sshd[5951]: Accepted publickey for core from 10.0.0.1 port 47186 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:40:58.622828 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:40:58.623971 kernel: audit: type=1101 audit(1768891258.542:879): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:58.624700 kernel: audit: type=1103 audit(1768891258.615:880): pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:58.615000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:58.757564 kernel: audit: type=1006 audit(1768891258.615:881): pid=5951 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 06:40:58.615000 audit[5951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb91b2560 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:58.900975 kernel: audit: type=1300 audit(1768891258.615:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb91b2560 a2=3 a3=0 items=0 ppid=1 pid=5951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:40:58.902727 kernel: audit: type=1327 audit(1768891258.615:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:58.615000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:40:58.785721 systemd-logind[1623]: New session 22 of user core. Jan 20 06:40:58.884801 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 06:40:58.921000 audit[5951]: USER_START pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.012935 kernel: audit: type=1105 audit(1768891258.921:882): pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:58.941000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.096668 kernel: audit: type=1103 audit(1768891258.941:883): pid=5955 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.107769 containerd[1645]: time="2026-01-20T06:40:59.107738628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:40:59.917834 sshd[5955]: Connection closed by 10.0.0.1 port 47186 Jan 20 06:40:59.912926 sshd-session[5951]: pam_unix(sshd:session): session closed for user core Jan 20 06:40:59.923828 containerd[1645]: time="2026-01-20T06:40:59.918669314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:40:59.938706 containerd[1645]: time="2026-01-20T06:40:59.937945339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:40:59.941845 containerd[1645]: time="2026-01-20T06:40:59.941019295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:40:59.937000 audit[5951]: USER_END pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.975734 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:47196.service - OpenSSH per-connection server daemon (10.0.0.1:47196). Jan 20 06:41:00.031006 kubelet[2865]: E0120 06:40:59.946758 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:41:00.031006 kubelet[2865]: E0120 06:40:59.946802 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:41:00.031006 kubelet[2865]: E0120 06:40:59.946907 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tlcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6f8db8dd5b-5v8sm_calico-apiserver(8605c7f4-dda9-48f9-8faf-f356da42c13a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:41:00.031006 kubelet[2865]: E0120 06:40:59.950570 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:40:59.977749 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:47186.service: Deactivated successfully. Jan 20 06:40:59.986662 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 06:40:59.993702 systemd-logind[1623]: Session 22 logged out. Waiting for processes to exit. Jan 20 06:41:00.037783 kernel: audit: type=1106 audit(1768891259.937:884): pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.997823 systemd-logind[1623]: Removed session 22. Jan 20 06:40:59.940000 audit[5951]: CRED_DISP pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.35:22-10.0.0.1:47196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:00.130843 kernel: audit: type=1104 audit(1768891259.940:885): pid=5951 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:40:59.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.35:22-10.0.0.1:47186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:00.499000 audit[5968]: USER_ACCT pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:00.509804 sshd[5968]: Accepted publickey for core from 10.0.0.1 port 47196 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:00.510000 audit[5968]: CRED_ACQ pid=5968 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:00.510000 audit[5968]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7f0a5660 a2=3 a3=0 items=0 ppid=1 pid=5968 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:00.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:00.522938 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:00.637965 systemd-logind[1623]: New session 23 of user core. 
Jan 20 06:41:00.665992 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 06:41:00.778000 audit[5968]: USER_START pid=5968 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:00.817000 audit[5978]: CRED_ACQ pid=5978 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:01.216955 kubelet[2865]: E0120 06:41:01.210822 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:41:03.457617 sshd[5978]: Connection closed by 10.0.0.1 port 47196 Jan 20 06:41:03.461823 sshd-session[5968]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:03.481000 audit[5968]: USER_END pid=5968 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:03.512600 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 20 06:41:03.512704 kernel: audit: type=1106 audit(1768891263.481:893): pid=5968 uid=0 
auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:03.529760 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:47204.service - OpenSSH per-connection server daemon (10.0.0.1:47204). Jan 20 06:41:03.552951 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:47196.service: Deactivated successfully. Jan 20 06:41:03.567870 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 06:41:03.570451 systemd[1]: session-23.scope: Consumed 1.042s CPU time, 60.4M memory peak. Jan 20 06:41:03.485000 audit[5968]: CRED_DISP pid=5968 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:03.684007 systemd-logind[1623]: Session 23 logged out. Waiting for processes to exit. Jan 20 06:41:03.695692 systemd-logind[1623]: Removed session 23. Jan 20 06:41:03.747833 kernel: audit: type=1104 audit(1768891263.485:894): pid=5968 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:03.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:47204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:03.834617 kernel: audit: type=1130 audit(1768891263.529:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:47204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:03.834866 kernel: audit: type=1131 audit(1768891263.554:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.35:22-10.0.0.1:47196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:03.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.35:22-10.0.0.1:47196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:04.299000 audit[6007]: USER_ACCT pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.326838 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:04.385719 sshd[6007]: Accepted publickey for core from 10.0.0.1 port 47204 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:04.389845 systemd-logind[1623]: New session 24 of user core. 
Jan 20 06:41:04.419821 kernel: audit: type=1101 audit(1768891264.299:897): pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.316000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.498978 kernel: audit: type=1103 audit(1768891264.316:898): pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.505963 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 06:41:04.557766 kernel: audit: type=1006 audit(1768891264.317:899): pid=6007 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 06:41:04.583643 kernel: audit: type=1300 audit(1768891264.317:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8cc22220 a2=3 a3=0 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:04.317000 audit[6007]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8cc22220 a2=3 a3=0 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:04.685612 kernel: audit: type=1327 audit(1768891264.317:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:04.317000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:04.620000 audit[6007]: USER_START pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.798749 kernel: audit: type=1105 audit(1768891264.620:900): pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:04.684000 audit[6014]: CRED_ACQ pid=6014 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:05.263778 kubelet[2865]: E0120 06:41:05.261002 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:41:06.101891 kubelet[2865]: E0120 06:41:06.099949 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:41:09.217764 kubelet[2865]: E0120 06:41:09.198596 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:41:09.226756 containerd[1645]: time="2026-01-20T06:41:09.219996741Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:41:09.528883 containerd[1645]: time="2026-01-20T06:41:09.524806201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:41:09.555774 containerd[1645]: time="2026-01-20T06:41:09.554690378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:41:09.555774 containerd[1645]: time="2026-01-20T06:41:09.555612665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:41:09.570615 kubelet[2865]: E0120 06:41:09.566944 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:41:09.570615 kubelet[2865]: E0120 06:41:09.567630 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:41:09.570615 kubelet[2865]: E0120 06:41:09.567771 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wzwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5bb7ff584c-brrnn_calico-apiserver(1b97c41d-4ead-4c93-97f0-70532331e2e7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:41:09.571727 kubelet[2865]: E0120 06:41:09.571700 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:41:09.666608 sshd[6014]: Connection closed by 10.0.0.1 port 47204 Jan 20 06:41:09.661509 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:09.678000 audit[6007]: USER_END pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:09.708729 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:09.708870 kernel: audit: type=1106 audit(1768891269.678:902): pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:09.851688 kernel: audit: type=1104 audit(1768891269.678:903): pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
06:41:09.678000 audit[6007]: CRED_DISP pid=6007 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:09.865923 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:47204.service: Deactivated successfully. Jan 20 06:41:09.879994 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 06:41:09.881698 systemd[1]: session-24.scope: Consumed 2.158s CPU time, 44.5M memory peak. Jan 20 06:41:09.894762 systemd-logind[1623]: Session 24 logged out. Waiting for processes to exit. Jan 20 06:41:09.954600 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:53848.service - OpenSSH per-connection server daemon (10.0.0.1:53848). Jan 20 06:41:09.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:47204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:09.991693 systemd-logind[1623]: Removed session 24. Jan 20 06:41:10.058379 kernel: audit: type=1131 audit(1768891269.867:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:47204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:09.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.35:22-10.0.0.1:53848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:10.106608 containerd[1645]: time="2026-01-20T06:41:10.103732845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:41:10.127776 kubelet[2865]: E0120 06:41:10.125670 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:41:10.167707 kernel: audit: type=1130 audit(1768891269.958:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.35:22-10.0.0.1:53848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:10.152000 audit[6033]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=6033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.228343 kernel: audit: type=1325 audit(1768891270.152:906): table=filter:146 family=2 entries=26 op=nft_register_rule pid=6033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.152000 audit[6033]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5f36de70 a2=0 a3=7ffe5f36de5c items=0 ppid=2978 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.352710 kernel: audit: type=1300 audit(1768891270.152:906): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5f36de70 a2=0 a3=7ffe5f36de5c items=0 ppid=2978 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.407549 kernel: audit: type=1327 audit(1768891270.152:906): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.281000 audit[6033]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=6033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.483576 kernel: audit: type=1325 audit(1768891270.281:907): table=nat:147 family=2 entries=20 op=nft_register_rule pid=6033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.484377 containerd[1645]: time="2026-01-20T06:41:10.483960377Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 20 06:41:10.281000 audit[6033]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5f36de70 a2=0 a3=0 items=0 ppid=2978 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.514780 containerd[1645]: time="2026-01-20T06:41:10.511612157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:41:10.514780 containerd[1645]: time="2026-01-20T06:41:10.511724475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:41:10.519976 kubelet[2865]: E0120 06:41:10.518613 2865 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:41:10.530589 kubelet[2865]: E0120 06:41:10.528523 2865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:41:10.530589 kubelet[2865]: E0120 06:41:10.529389 2865 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qqqgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54fdff59b4-bvgmz_calico-system(1fb741a2-9573-41fd-9b50-18c9b4a4a79a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:41:10.532006 kubelet[2865]: E0120 06:41:10.531630 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:41:10.639716 kernel: audit: type=1300 audit(1768891270.281:907): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5f36de70 a2=0 a3=0 items=0 ppid=2978 pid=6033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.707478 kernel: audit: type=1327 audit(1768891270.281:907): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.807000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:10.814776 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 53848 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:10.814000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:10.814000 audit[6032]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfb9d8420 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.814000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:10.819744 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:10.866935 systemd-logind[1623]: New session 25 of user core. 
Jan 20 06:41:10.838000 audit[6037]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=6037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.838000 audit[6037]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff1418d120 a2=0 a3=7fff1418d10c items=0 ppid=2978 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.879963 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 06:41:10.891000 audit[6037]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=6037 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:10.891000 audit[6037]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1418d120 a2=0 a3=0 items=0 ppid=2978 pid=6037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:10.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:10.904000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:10.914000 audit[6039]: CRED_ACQ pid=6039 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.103540 kubelet[2865]: E0120 06:41:12.101829 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:41:12.178815 sshd[6039]: Connection closed by 10.0.0.1 port 53848 Jan 20 06:41:12.184600 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:12.187000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.193000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.211907 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:53848.service: Deactivated successfully. Jan 20 06:41:12.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.35:22-10.0.0.1:53848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:12.223811 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 06:41:12.228503 systemd-logind[1623]: Session 25 logged out. 
Waiting for processes to exit. Jan 20 06:41:12.256611 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862). Jan 20 06:41:12.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.35:22-10.0.0.1:53862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:12.260747 systemd-logind[1623]: Removed session 25. Jan 20 06:41:12.518000 audit[6052]: USER_ACCT pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.523812 sshd[6052]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:12.522000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.525000 audit[6052]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa958ffd0 a2=3 a3=0 items=0 ppid=1 pid=6052 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:12.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:12.536406 sshd-session[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:12.550016 systemd-logind[1623]: New session 26 of user core. Jan 20 06:41:12.567539 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 06:41:12.581000 audit[6052]: USER_START pid=6052 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:12.592000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:13.183320 sshd[6056]: Connection closed by 10.0.0.1 port 53862 Jan 20 06:41:13.183468 sshd-session[6052]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:13.184000 audit[6052]: USER_END pid=6052 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:13.185000 audit[6052]: CRED_DISP pid=6052 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:13.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.35:22-10.0.0.1:53862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:13.193464 systemd-logind[1623]: Session 26 logged out. Waiting for processes to exit. Jan 20 06:41:13.196400 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:53862.service: Deactivated successfully. Jan 20 06:41:13.205796 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 20 06:41:13.212847 systemd-logind[1623]: Removed session 26. Jan 20 06:41:14.074844 kubelet[2865]: E0120 06:41:14.071916 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:41:15.073505 kubelet[2865]: E0120 06:41:15.072549 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:41:16.091681 kubelet[2865]: E0120 06:41:16.084483 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:41:18.231911 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:34142.service - OpenSSH per-connection server daemon (10.0.0.1:34142). 
Jan 20 06:41:18.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:18.273528 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 20 06:41:18.273618 kernel: audit: type=1130 audit(1768891278.232:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:18.630000 audit[6069]: USER_ACCT pid=6069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.657745 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 34142 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:18.684720 sshd-session[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:18.701462 kernel: audit: type=1101 audit(1768891278.630:928): pid=6069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.674000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.716684 systemd-logind[1623]: New session 27 of user core. 
Jan 20 06:41:18.864976 kernel: audit: type=1103 audit(1768891278.674:929): pid=6069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.865594 kernel: audit: type=1006 audit(1768891278.675:930): pid=6069 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 06:41:18.865626 kernel: audit: type=1300 audit(1768891278.675:930): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7b57d650 a2=3 a3=0 items=0 ppid=1 pid=6069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:18.675000 audit[6069]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7b57d650 a2=3 a3=0 items=0 ppid=1 pid=6069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:18.675000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:18.956508 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 06:41:18.991699 kernel: audit: type=1327 audit(1768891278.675:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:18.991807 kernel: audit: type=1105 audit(1768891278.979:931): pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.979000 audit[6069]: USER_START pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:18.992000 audit[6073]: CRED_ACQ pid=6073 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:19.217856 kernel: audit: type=1103 audit(1768891278.992:932): pid=6073 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:19.711960 sshd[6073]: Connection closed by 10.0.0.1 port 34142 Jan 20 06:41:19.710818 sshd-session[6069]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:19.717000 audit[6069]: USER_END pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 06:41:19.725955 systemd-logind[1623]: Session 27 logged out. Waiting for processes to exit. Jan 20 06:41:19.729640 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:34142.service: Deactivated successfully. Jan 20 06:41:19.741472 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 06:41:19.745648 systemd-logind[1623]: Removed session 27. Jan 20 06:41:19.850533 kernel: audit: type=1106 audit(1768891279.717:933): pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:19.717000 audit[6069]: CRED_DISP pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:19.932503 kernel: audit: type=1104 audit(1768891279.717:934): pid=6069 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:19.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:34142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:22.077428 kubelet[2865]: E0120 06:41:22.075757 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:41:24.082687 kubelet[2865]: E0120 06:41:24.079914 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:41:24.104598 kubelet[2865]: E0120 06:41:24.101639 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:41:24.749494 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:55632.service - OpenSSH per-connection server daemon (10.0.0.1:55632). Jan 20 06:41:24.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:24.838666 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:24.838794 kernel: audit: type=1130 audit(1768891284.749:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:25.101940 kubelet[2865]: E0120 06:41:25.100937 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:41:25.103650 kubelet[2865]: E0120 06:41:25.102495 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:41:25.482000 audit[6112]: USER_ACCT pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.490445 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 55632 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:25.497798 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:25.556914 kernel: audit: type=1101 audit(1768891285.482:937): pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.487000 audit[6112]: CRED_ACQ pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.610687 systemd-logind[1623]: New session 28 of user core. 
Jan 20 06:41:25.632629 kernel: audit: type=1103 audit(1768891285.487:938): pid=6112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.632713 kernel: audit: type=1006 audit(1768891285.487:939): pid=6112 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 06:41:25.487000 audit[6112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff55d54210 a2=3 a3=0 items=0 ppid=1 pid=6112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:25.741794 kernel: audit: type=1300 audit(1768891285.487:939): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff55d54210 a2=3 a3=0 items=0 ppid=1 pid=6112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:25.487000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:25.748427 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 06:41:25.772335 kernel: audit: type=1327 audit(1768891285.487:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:25.777000 audit[6112]: USER_START pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.895669 kernel: audit: type=1105 audit(1768891285.777:940): pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.790000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:25.990667 kernel: audit: type=1103 audit(1768891285.790:941): pid=6116 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:26.136480 kubelet[2865]: E0120 06:41:26.134811 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:41:26.521540 sshd[6116]: Connection closed by 10.0.0.1 port 55632 Jan 20 06:41:26.522726 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:26.532000 audit[6112]: USER_END pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:26.574787 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:55632.service: Deactivated successfully. Jan 20 06:41:26.588910 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 06:41:26.608536 systemd-logind[1623]: Session 28 logged out. Waiting for processes to exit. Jan 20 06:41:26.611779 systemd-logind[1623]: Removed session 28. Jan 20 06:41:26.621658 kernel: audit: type=1106 audit(1768891286.532:942): pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:26.533000 audit[6112]: CRED_DISP pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:26.698595 kernel: audit: type=1104 audit(1768891286.533:943): pid=6112 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:26.572000 audit[1]: 
SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:55632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:28.096385 kubelet[2865]: E0120 06:41:28.090594 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:41:31.608488 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:31.608623 kernel: audit: type=1130 audit(1768891291.580:945): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.35:22-10.0.0.1:55646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:31.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.35:22-10.0.0.1:55646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:31.580782 systemd[1]: Started sshd@27-10.0.0.35:22-10.0.0.1:55646.service - OpenSSH per-connection server daemon (10.0.0.1:55646). 
Jan 20 06:41:31.926486 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 55646 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:32.026585 kernel: audit: type=1101 audit(1768891291.922:946): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:31.922000 audit[6131]: USER_ACCT pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:31.945650 sshd-session[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:31.936000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.086568 systemd-logind[1623]: New session 29 of user core. Jan 20 06:41:32.102449 kernel: audit: type=1103 audit(1768891291.936:947): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.140702 kernel: audit: type=1006 audit(1768891291.936:948): pid=6131 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 06:41:32.141809 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 06:41:31.936000 audit[6131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e942a00 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:32.231517 kernel: audit: type=1300 audit(1768891291.936:948): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3e942a00 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:31.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:32.277750 kernel: audit: type=1327 audit(1768891291.936:948): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:32.168000 audit[6131]: USER_START pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.376865 kernel: audit: type=1105 audit(1768891292.168:949): pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.174000 audit[6135]: CRED_ACQ pid=6135 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.439551 kernel: audit: 
type=1103 audit(1768891292.174:950): pid=6135 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.735604 sshd[6135]: Connection closed by 10.0.0.1 port 55646 Jan 20 06:41:32.736951 sshd-session[6131]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:32.746000 audit[6131]: USER_END pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.838762 kernel: audit: type=1106 audit(1768891292.746:951): pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.841000 audit[6131]: CRED_DISP pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.878765 systemd[1]: sshd@27-10.0.0.35:22-10.0.0.1:55646.service: Deactivated successfully. Jan 20 06:41:32.889478 systemd-logind[1623]: Session 29 logged out. Waiting for processes to exit. Jan 20 06:41:32.890818 systemd[1]: session-29.scope: Deactivated successfully. 
Jan 20 06:41:32.912732 kernel: audit: type=1104 audit(1768891292.841:952): pid=6131 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:32.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.35:22-10.0.0.1:55646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:32.919847 systemd-logind[1623]: Removed session 29. Jan 20 06:41:36.082261 kubelet[2865]: E0120 06:41:36.077877 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:41:37.078274 kubelet[2865]: E0120 06:41:37.075583 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:41:37.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.35:22-10.0.0.1:37186 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:37.759550 systemd[1]: Started sshd@28-10.0.0.35:22-10.0.0.1:37186.service - OpenSSH per-connection server daemon (10.0.0.1:37186). Jan 20 06:41:37.770581 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:37.770669 kernel: audit: type=1130 audit(1768891297.758:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.35:22-10.0.0.1:37186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:37.950000 audit[6149]: USER_ACCT pid=6149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:37.953840 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 37186 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:37.962336 sshd-session[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:38.002278 kernel: audit: type=1101 audit(1768891297.950:955): pid=6149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.002379 kernel: audit: type=1103 audit(1768891297.957:956): pid=6149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:37.957000 audit[6149]: CRED_ACQ pid=6149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.006597 systemd-logind[1623]: New session 30 of user core. Jan 20 06:41:38.072336 kubelet[2865]: E0120 06:41:38.070302 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:41:38.083711 kubelet[2865]: E0120 06:41:38.082631 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:41:38.083711 kubelet[2865]: E0120 06:41:38.082723 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:41:38.085274 kernel: audit: type=1006 audit(1768891297.957:957): pid=6149 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 06:41:37.957000 audit[6149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2605ac60 a2=3 a3=0 items=0 ppid=1 pid=6149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:38.087706 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 06:41:38.143252 kernel: audit: type=1300 audit(1768891297.957:957): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2605ac60 a2=3 a3=0 items=0 ppid=1 pid=6149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:37.957000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:38.166281 kernel: audit: type=1327 audit(1768891297.957:957): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:38.166374 kernel: audit: type=1105 audit(1768891298.101:958): pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.101000 audit[6149]: USER_START pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.108000 audit[6154]: CRED_ACQ pid=6154 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.291386 kernel: audit: type=1103 audit(1768891298.108:959): pid=6154 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.607791 sshd[6154]: Connection closed by 10.0.0.1 port 37186 Jan 20 06:41:38.611530 sshd-session[6149]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:38.617000 audit[6149]: USER_END pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.632672 systemd[1]: sshd@28-10.0.0.35:22-10.0.0.1:37186.service: Deactivated successfully. Jan 20 06:41:38.640557 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 06:41:38.645677 systemd-logind[1623]: Session 30 logged out. Waiting for processes to exit. Jan 20 06:41:38.652603 systemd-logind[1623]: Removed session 30. 
Jan 20 06:41:38.691459 kernel: audit: type=1106 audit(1768891298.617:960): pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.623000 audit[6149]: CRED_DISP pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.748484 kernel: audit: type=1104 audit(1768891298.623:961): pid=6149 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:38.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.35:22-10.0.0.1:37186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:39.083813 kubelet[2865]: E0120 06:41:39.083496 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:41:39.090423 kubelet[2865]: E0120 06:41:39.089622 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:41:40.659000 audit[6169]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:40.659000 audit[6169]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd08387cf0 a2=0 a3=7ffd08387cdc items=0 ppid=2978 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:40.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:40.686000 audit[6169]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:41:40.686000 audit[6169]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd08387cf0 a2=0 a3=7ffd08387cdc items=0 ppid=2978 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:40.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:41:41.082424 kubelet[2865]: E0120 06:41:41.081757 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:41:42.073316 kubelet[2865]: E0120 06:41:42.073013 2865 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:41:43.633513 systemd[1]: Started sshd@29-10.0.0.35:22-10.0.0.1:37192.service - OpenSSH per-connection server daemon (10.0.0.1:37192). Jan 20 06:41:43.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.35:22-10.0.0.1:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:43.647304 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 06:41:43.647449 kernel: audit: type=1130 audit(1768891303.632:965): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.35:22-10.0.0.1:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:43.828000 audit[6172]: USER_ACCT pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:43.831742 sshd[6172]: Accepted publickey for core from 10.0.0.1 port 37192 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:43.840801 sshd-session[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:43.880264 kernel: audit: type=1101 audit(1768891303.828:966): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:43.834000 audit[6172]: CRED_ACQ pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:43.890324 systemd-logind[1623]: New session 31 of user core. Jan 20 06:41:43.925264 kernel: audit: type=1103 audit(1768891303.834:967): pid=6172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:43.834000 audit[6172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbfbb8b30 a2=3 a3=0 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:43.996821 kernel: audit: type=1006 audit(1768891303.834:968): pid=6172 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 06:41:43.997322 kernel: audit: type=1300 audit(1768891303.834:968): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbfbb8b30 a2=3 a3=0 items=0 ppid=1 pid=6172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:43.997358 kernel: audit: type=1327 audit(1768891303.834:968): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:43.834000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:44.018767 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 06:41:44.035000 audit[6172]: USER_START pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.110359 kernel: audit: type=1105 audit(1768891304.035:969): pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.044000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.166681 kernel: audit: type=1103 audit(1768891304.044:970): pid=6180 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.589550 sshd[6180]: Connection closed by 10.0.0.1 port 37192 Jan 20 06:41:44.591698 sshd-session[6172]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:44.599000 audit[6172]: USER_END pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.608808 systemd[1]: sshd@29-10.0.0.35:22-10.0.0.1:37192.service: Deactivated successfully. 
Jan 20 06:41:44.617373 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 06:41:44.624658 systemd-logind[1623]: Session 31 logged out. Waiting for processes to exit. Jan 20 06:41:44.627588 systemd-logind[1623]: Removed session 31. Jan 20 06:41:44.600000 audit[6172]: CRED_DISP pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.710701 kernel: audit: type=1106 audit(1768891304.599:971): pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.711464 kernel: audit: type=1104 audit(1768891304.600:972): pid=6172 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:44.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.35:22-10.0.0.1:37192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:48.083594 kubelet[2865]: E0120 06:41:48.083360 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54fdff59b4-bvgmz" podUID="1fb741a2-9573-41fd-9b50-18c9b4a4a79a" Jan 20 06:41:49.082021 kubelet[2865]: E0120 06:41:49.081647 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-5v8sm" podUID="8605c7f4-dda9-48f9-8faf-f356da42c13a" Jan 20 06:41:49.085235 kubelet[2865]: E0120 06:41:49.083939 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8db8dd5b-nqfrx" podUID="fdd5baaa-865a-43eb-a3a6-626c707ee467" Jan 20 06:41:49.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.35:22-10.0.0.1:53550 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:49.618367 systemd[1]: Started sshd@30-10.0.0.35:22-10.0.0.1:53550.service - OpenSSH per-connection server daemon (10.0.0.1:53550). Jan 20 06:41:49.630923 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:49.630995 kernel: audit: type=1130 audit(1768891309.617:974): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.35:22-10.0.0.1:53550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:49.813330 sshd[6196]: Accepted publickey for core from 10.0.0.1 port 53550 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:49.812000 audit[6196]: USER_ACCT pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:49.857412 kernel: audit: type=1101 audit(1768891309.812:975): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:49.864342 sshd-session[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:49.859000 audit[6196]: CRED_ACQ pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:49.914386 kernel: audit: type=1103 audit(1768891309.859:976): pid=6196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:49.926421 systemd-logind[1623]: New session 32 of user core. Jan 20 06:41:49.860000 audit[6196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0b2ca9e0 a2=3 a3=0 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:50.009477 kernel: audit: type=1006 audit(1768891309.860:977): pid=6196 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 06:41:50.009595 kernel: audit: type=1300 audit(1768891309.860:977): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0b2ca9e0 a2=3 a3=0 items=0 ppid=1 pid=6196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:50.010699 kernel: audit: type=1327 audit(1768891309.860:977): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:49.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:50.012982 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 20 06:41:50.023000 audit[6196]: USER_START pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.096490 kernel: audit: type=1105 audit(1768891310.023:978): pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.029000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.148278 kernel: audit: type=1103 audit(1768891310.029:979): pid=6200 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.339539 sshd[6200]: Connection closed by 10.0.0.1 port 53550 Jan 20 06:41:50.341454 sshd-session[6196]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:50.345000 audit[6196]: USER_END pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.352012 systemd[1]: sshd@30-10.0.0.35:22-10.0.0.1:53550.service: Deactivated successfully. 
Jan 20 06:41:50.358552 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 06:41:50.364947 systemd-logind[1623]: Session 32 logged out. Waiting for processes to exit. Jan 20 06:41:50.369594 systemd-logind[1623]: Removed session 32. Jan 20 06:41:50.412668 kernel: audit: type=1106 audit(1768891310.345:980): pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.413642 kernel: audit: type=1104 audit(1768891310.345:981): pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.345000 audit[6196]: CRED_DISP pid=6196 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:50.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.35:22-10.0.0.1:53550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:51.104742 kubelet[2865]: E0120 06:41:51.104480 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-grqpc" podUID="1d1bd19b-efe8-47e1-8a7a-7256f246c0d1" Jan 20 06:41:52.078526 kubelet[2865]: E0120 06:41:52.076535 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kp869" podUID="67f738e9-ce9e-42e1-a454-66084ff2d3ad" Jan 20 06:41:53.080594 kubelet[2865]: E0120 06:41:53.080247 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5bb7ff584c-brrnn" podUID="1b97c41d-4ead-4c93-97f0-70532331e2e7" Jan 20 06:41:55.083661 kubelet[2865]: E0120 06:41:55.083424 2865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7688649cc6-vz554" podUID="85a3d7fc-92d2-477e-a3c6-cf998fc60fae" Jan 20 06:41:55.368686 systemd[1]: Started sshd@31-10.0.0.35:22-10.0.0.1:56046.service - OpenSSH per-connection server daemon (10.0.0.1:56046). Jan 20 06:41:55.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.35:22-10.0.0.1:56046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:41:55.392296 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:41:55.392392 kernel: audit: type=1130 audit(1768891315.369:983): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.35:22-10.0.0.1:56046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:41:55.630000 audit[6245]: USER_ACCT pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.633726 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 56046 ssh2: RSA SHA256:DeJ8htbwqOEaFlEllbpgzB0mmaeGe6BFQy6fUvLNOuM Jan 20 06:41:55.637570 sshd-session[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:41:55.670625 systemd-logind[1623]: New session 33 of user core. Jan 20 06:41:55.680296 kernel: audit: type=1101 audit(1768891315.630:984): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.633000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.731302 kernel: audit: type=1103 audit(1768891315.633:985): pid=6245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.731424 kernel: audit: type=1006 audit(1768891315.633:986): pid=6245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 20 06:41:55.633000 audit[6245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc67282bc0 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:55.816528 kernel: audit: type=1300 audit(1768891315.633:986): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc67282bc0 a2=3 a3=0 items=0 ppid=1 pid=6245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:41:55.633000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:55.821409 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 20 06:41:55.835547 kernel: audit: type=1327 audit(1768891315.633:986): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:41:55.841000 audit[6245]: USER_START pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.905637 kernel: audit: type=1105 audit(1768891315.841:987): pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.956460 kernel: audit: type=1103 audit(1768891315.848:988): pid=6249 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:55.848000 audit[6249]: CRED_ACQ pid=6249 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:56.077927 kubelet[2865]: E0120 06:41:56.076952 2865 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 06:41:56.173358 sshd[6249]: Connection closed by 10.0.0.1 port 56046 Jan 20 06:41:56.173594 sshd-session[6245]: pam_unix(sshd:session): session closed for user core Jan 20 06:41:56.178000 audit[6245]: USER_END pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:56.184644 systemd-logind[1623]: Session 33 logged out. Waiting for processes to exit. Jan 20 06:41:56.188526 systemd[1]: sshd@31-10.0.0.35:22-10.0.0.1:56046.service: Deactivated successfully. Jan 20 06:41:56.196003 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 06:41:56.201356 systemd-logind[1623]: Removed session 33. 
Jan 20 06:41:56.178000 audit[6245]: CRED_DISP pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:56.302918 kernel: audit: type=1106 audit(1768891316.178:989): pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:56.303781 kernel: audit: type=1104 audit(1768891316.178:990): pid=6245 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 06:41:56.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.35:22-10.0.0.1:56046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'