Jan 15 05:46:41.998805 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 15 03:08:43 -00 2026 Jan 15 05:46:41.998929 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:46:41.998942 kernel: BIOS-provided physical RAM map: Jan 15 05:46:41.998977 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 05:46:41.998987 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 05:46:41.998996 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 05:46:41.999008 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 05:46:41.999018 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 05:46:41.999046 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 05:46:41.999056 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 05:46:41.999067 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jan 15 05:46:41.999098 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 05:46:41.999108 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 05:46:41.999119 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 05:46:41.999132 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 05:46:41.999143 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 05:46:41.999193 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 05:46:41.999204 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 05:46:41.999215 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 05:46:41.999225 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 05:46:41.999237 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 05:46:41.999248 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 05:46:41.999260 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 05:46:41.999269 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 05:46:41.999279 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 05:46:41.999289 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 05:46:41.999360 kernel: NX (Execute Disable) protection: active Jan 15 05:46:41.999372 kernel: APIC: Static calls initialized Jan 15 05:46:41.999384 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jan 15 05:46:41.999395 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jan 15 05:46:41.999406 kernel: extended physical RAM map: Jan 15 05:46:41.999418 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 15 05:46:41.999428 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 15 05:46:41.999438 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 15 05:46:41.999450 kernel: 
reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jan 15 05:46:41.999492 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 15 05:46:41.999504 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jan 15 05:46:41.999548 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jan 15 05:46:41.999559 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jan 15 05:46:41.999570 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jan 15 05:46:41.999606 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jan 15 05:46:41.999645 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jan 15 05:46:41.999656 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jan 15 05:46:41.999668 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jan 15 05:46:41.999680 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jan 15 05:46:41.999692 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jan 15 05:46:41.999704 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jan 15 05:46:41.999714 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 15 05:46:41.999726 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jan 15 05:46:41.999739 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jan 15 05:46:41.999786 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jan 15 05:46:41.999798 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jan 15 05:46:41.999809 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jan 15 05:46:41.999820 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 15 05:46:41.999831 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jan 15 05:46:41.999843 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 15 05:46:41.999855 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jan 15 05:46:41.999866 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 15 05:46:41.999900 kernel: efi: EFI v2.7 by EDK II Jan 15 05:46:41.999912 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jan 15 05:46:41.999941 kernel: random: crng init done Jan 15 05:46:41.999982 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jan 15 05:46:42.000010 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jan 15 05:46:42.000022 kernel: secureboot: Secure boot disabled Jan 15 05:46:42.000034 kernel: SMBIOS 2.8 present. 
Jan 15 05:46:42.000045 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jan 15 05:46:42.000057 kernel: DMI: Memory slots populated: 1/1 Jan 15 05:46:42.000068 kernel: Hypervisor detected: KVM Jan 15 05:46:42.000080 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 05:46:42.000092 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 15 05:46:42.000103 kernel: kvm-clock: using sched offset of 12000875723 cycles Jan 15 05:46:42.000116 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 15 05:46:42.000158 kernel: tsc: Detected 2445.426 MHz processor Jan 15 05:46:42.000170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 15 05:46:42.000183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 15 05:46:42.000195 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jan 15 05:46:42.000207 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 15 05:46:42.000218 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 15 05:46:42.000230 kernel: Using GB pages for direct mapping Jan 15 05:46:42.000283 kernel: ACPI: Early table checksum verification disabled Jan 15 05:46:42.000298 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 15 05:46:42.000359 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 15 05:46:42.000374 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000387 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000399 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 15 05:46:42.000413 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000487 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000499 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000512 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 15 05:46:42.000524 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 15 05:46:42.000535 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 15 05:46:42.000547 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 15 05:46:42.000558 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 15 05:46:42.000600 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 15 05:46:42.000612 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 15 05:46:42.000624 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 15 05:46:42.000635 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 15 05:46:42.000647 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 15 05:46:42.000659 kernel: No NUMA configuration found Jan 15 05:46:42.000670 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jan 15 05:46:42.000681 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jan 15 05:46:42.000724 kernel: Zone ranges: Jan 15 05:46:42.000736 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 15 05:46:42.000750 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jan 15 05:46:42.000760 kernel: Normal empty Jan 15 05:46:42.000771 kernel: Device empty Jan 15 
05:46:42.000784 kernel: Movable zone start for each node Jan 15 05:46:42.000796 kernel: Early memory node ranges Jan 15 05:46:42.000806 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 15 05:46:42.000872 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 15 05:46:42.000884 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 15 05:46:42.000895 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jan 15 05:46:42.000906 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jan 15 05:46:42.000916 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jan 15 05:46:42.000928 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jan 15 05:46:42.000939 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jan 15 05:46:42.000973 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jan 15 05:46:42.001021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 05:46:42.001110 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 15 05:46:42.001148 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 15 05:46:42.001159 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 15 05:46:42.001170 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jan 15 05:46:42.001181 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jan 15 05:46:42.001192 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 15 05:46:42.001204 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jan 15 05:46:42.001216 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jan 15 05:46:42.001258 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 15 05:46:42.001270 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 15 05:46:42.001281 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 15 05:46:42.001292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 15 05:46:42.001370 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 15 05:46:42.001384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 15 05:46:42.001397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 15 05:46:42.001409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 15 05:46:42.001423 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 15 05:46:42.001434 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 15 05:46:42.001445 kernel: TSC deadline timer available Jan 15 05:46:42.001512 kernel: CPU topo: Max. logical packages: 1 Jan 15 05:46:42.001524 kernel: CPU topo: Max. logical dies: 1 Jan 15 05:46:42.001535 kernel: CPU topo: Max. dies per package: 1 Jan 15 05:46:42.001546 kernel: CPU topo: Max. threads per core: 1 Jan 15 05:46:42.001558 kernel: CPU topo: Num. cores per package: 4 Jan 15 05:46:42.001569 kernel: CPU topo: Num. 
threads per package: 4 Jan 15 05:46:42.001580 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jan 15 05:46:42.001629 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 15 05:46:42.001674 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 15 05:46:42.001687 kernel: kvm-guest: setup PV sched yield Jan 15 05:46:42.001700 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jan 15 05:46:42.001711 kernel: Booting paravirtualized kernel on KVM Jan 15 05:46:42.001724 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 15 05:46:42.001739 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 15 05:46:42.001752 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jan 15 05:46:42.001800 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jan 15 05:46:42.001813 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 15 05:46:42.001825 kernel: kvm-guest: PV spinlocks enabled Jan 15 05:46:42.001837 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 15 05:46:42.001870 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:46:42.001883 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 15 05:46:42.001931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 15 05:46:42.001946 kernel: Fallback order for Node 0: 0 Jan 15 05:46:42.001957 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jan 15 05:46:42.001969 kernel: Policy zone: DMA32 Jan 15 05:46:42.001983 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 15 05:46:42.001995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 15 05:46:42.002006 kernel: ftrace: allocating 40128 entries in 157 pages Jan 15 05:46:42.002051 kernel: ftrace: allocated 157 pages with 5 groups Jan 15 05:46:42.002063 kernel: Dynamic Preempt: voluntary Jan 15 05:46:42.002075 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 15 05:46:42.002087 kernel: rcu: RCU event tracing is enabled. Jan 15 05:46:42.002098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 15 05:46:42.002110 kernel: Trampoline variant of Tasks RCU enabled. Jan 15 05:46:42.002122 kernel: Rude variant of Tasks RCU enabled. Jan 15 05:46:42.002133 kernel: Tracing variant of Tasks RCU enabled. Jan 15 05:46:42.002177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 15 05:46:42.002190 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 15 05:46:42.002223 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:46:42.002236 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:46:42.002247 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 15 05:46:42.002258 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 15 05:46:42.002270 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 15 05:46:42.002387 kernel: Console: colour dummy device 80x25 Jan 15 05:46:42.002402 kernel: printk: legacy console [ttyS0] enabled Jan 15 05:46:42.002442 kernel: ACPI: Core revision 20240827 Jan 15 05:46:42.002456 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 15 05:46:42.002503 kernel: APIC: Switch to symmetric I/O mode setup Jan 15 05:46:42.002517 kernel: x2apic enabled Jan 15 05:46:42.002530 kernel: APIC: Switched APIC routing to: physical x2apic Jan 15 05:46:42.002583 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 15 05:46:42.002597 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 15 05:46:42.002611 kernel: kvm-guest: setup PV IPIs Jan 15 05:46:42.002622 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 15 05:46:42.002635 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:46:42.002648 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Jan 15 05:46:42.002661 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 15 05:46:42.002710 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 15 05:46:42.002725 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 15 05:46:42.002736 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 15 05:46:42.002748 kernel: Spectre V2 : Mitigation: Retpolines Jan 15 05:46:42.002763 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 15 05:46:42.002774 kernel: Speculative Store Bypass: Vulnerable Jan 15 05:46:42.002788 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 15 05:46:42.002835 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 15 05:46:42.002870 kernel: active return thunk: srso_alias_return_thunk Jan 15 05:46:42.002885 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 15 05:46:42.002896 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 15 05:46:42.002910 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 15 05:46:42.002923 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 15 05:46:42.002936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 15 05:46:42.002984 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 15 05:46:42.002998 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 15 05:46:42.003010 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 15 05:46:42.003022 kernel: Freeing SMP alternatives memory: 32K Jan 15 05:46:42.003035 kernel: pid_max: default: 32768 minimum: 301 Jan 15 05:46:42.003048 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 15 05:46:42.003061 kernel: landlock: Up and running. Jan 15 05:46:42.003110 kernel: SELinux: Initializing. 
Jan 15 05:46:42.003122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:46:42.003134 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 15 05:46:42.003148 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 15 05:46:42.003161 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 15 05:46:42.003174 kernel: signal: max sigframe size: 1776 Jan 15 05:46:42.003185 kernel: rcu: Hierarchical SRCU implementation. Jan 15 05:46:42.003235 kernel: rcu: Max phase no-delay instances is 400. Jan 15 05:46:42.003248 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 15 05:46:42.003262 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 15 05:46:42.003275 kernel: smp: Bringing up secondary CPUs ... Jan 15 05:46:42.003287 kernel: smpboot: x86: Booting SMP configuration: Jan 15 05:46:42.003298 kernel: .... node #0, CPUs: #1 #2 #3 Jan 15 05:46:42.003367 kernel: smp: Brought up 1 node, 4 CPUs Jan 15 05:46:42.003416 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Jan 15 05:46:42.003431 kernel: Memory: 2439044K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120816K reserved, 0K cma-reserved) Jan 15 05:46:42.003443 kernel: devtmpfs: initialized Jan 15 05:46:42.003456 kernel: x86/mm: Memory block size: 128MB Jan 15 05:46:42.003499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 15 05:46:42.003512 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 15 05:46:42.003523 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 15 05:46:42.003564 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 15 05:46:42.003576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 15 05:46:42.003588 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 15 05:46:42.003600 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 15 05:46:42.003612 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 15 05:46:42.003623 kernel: pinctrl core: initialized pinctrl subsystem Jan 15 05:46:42.003635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 15 05:46:42.003669 kernel: audit: initializing netlink subsys (disabled) Jan 15 05:46:42.003681 kernel: audit: type=2000 audit(1768455994.486:1): state=initialized audit_enabled=0 res=1 Jan 15 05:46:42.003692 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 15 05:46:42.003703 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 15 05:46:42.003715 kernel: cpuidle: using governor menu Jan 15 05:46:42.003726 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 15 05:46:42.003740 kernel: dca service started, version 1.12.1 Jan 15 05:46:42.003781 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 15 05:46:42.003793 kernel: PCI: Using configuration type 1 for base access Jan 15 05:46:42.003804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 15 05:46:42.003816 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 15 05:46:42.003828 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 15 05:46:42.003839 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 15 05:46:42.003851 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 15 05:46:42.003884 kernel: ACPI: Added _OSI(Module Device) Jan 15 05:46:42.003913 kernel: ACPI: Added _OSI(Processor Device) Jan 15 05:46:42.003926 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 15 05:46:42.003938 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 15 05:46:42.003949 kernel: ACPI: Interpreter enabled Jan 15 05:46:42.003960 kernel: ACPI: PM: (supports S0 S3 S5) Jan 15 05:46:42.003972 kernel: ACPI: Using IOAPIC for interrupt routing Jan 15 05:46:42.004006 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 15 05:46:42.004018 kernel: PCI: Using E820 reservations for host bridge windows Jan 15 05:46:42.004029 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 15 05:46:42.004041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 15 05:46:42.004516 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 15 05:46:42.004808 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 15 05:46:42.005133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 15 05:46:42.005151 kernel: PCI host bridge to bus 0000:00 Jan 15 05:46:42.005570 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 15 05:46:42.005842 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 15 05:46:42.006112 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 15 05:46:42.006409 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jan 15 05:46:42.006752 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jan 15 05:46:42.007004 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jan 15 05:46:42.007265 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 15 05:46:42.007667 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 15 05:46:42.007960 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jan 15 05:46:42.008402 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jan 15 05:46:42.008758 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jan 15 05:46:42.009054 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jan 15 05:46:42.009411 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 15 05:46:42.009770 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jan 15 05:46:42.010069 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jan 15 05:46:42.010506 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jan 15 05:46:42.010816 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jan 15 05:46:42.011132 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 15 05:46:42.011521 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jan 15 05:46:42.011823 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jan 15 05:46:42.012166 kernel: pci 0000:00:03.0: BAR 4 [mem 
0x380000004000-0x380000007fff 64bit pref] Jan 15 05:46:42.012607 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 15 05:46:42.012920 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jan 15 05:46:42.013217 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jan 15 05:46:42.013617 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jan 15 05:46:42.013918 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jan 15 05:46:42.014294 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 15 05:46:42.014684 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 15 05:46:42.014996 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 15 05:46:42.015442 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jan 15 05:46:42.015784 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jan 15 05:46:42.016089 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 15 05:46:42.016512 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jan 15 05:46:42.016533 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 15 05:46:42.016547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 15 05:46:42.016561 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 15 05:46:42.016572 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 15 05:46:42.016585 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 15 05:46:42.016630 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 15 05:46:42.016644 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 15 05:46:42.016657 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 15 05:46:42.016670 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 15 05:46:42.016682 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 15 05:46:42.016694 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 15 05:46:42.016707 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 15 05:46:42.016748 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 15 05:46:42.016763 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 15 05:46:42.016774 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 15 05:46:42.016788 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 15 05:46:42.016802 kernel: iommu: Default domain type: Translated Jan 15 05:46:42.016814 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 15 05:46:42.016827 kernel: efivars: Registered efivars operations Jan 15 05:46:42.016840 kernel: PCI: Using ACPI for IRQ routing Jan 15 05:46:42.016889 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 15 05:46:42.016902 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 15 05:46:42.016914 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jan 15 05:46:42.016927 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jan 15 05:46:42.016938 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jan 15 05:46:42.016951 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jan 15 05:46:42.016964 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jan 15 05:46:42.017007 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jan 15 05:46:42.017019 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jan 15 
05:46:42.017381 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 15 05:46:42.017731 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 15 05:46:42.018041 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 15 05:46:42.018062 kernel: vgaarb: loaded Jan 15 05:46:42.018123 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 15 05:46:42.018136 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 15 05:46:42.018150 kernel: clocksource: Switched to clocksource kvm-clock Jan 15 05:46:42.018163 kernel: VFS: Disk quotas dquot_6.6.0 Jan 15 05:46:42.018175 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 15 05:46:42.018189 kernel: pnp: PnP ACPI init Jan 15 05:46:42.018604 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jan 15 05:46:42.018666 kernel: pnp: PnP ACPI: found 6 devices Jan 15 05:46:42.018680 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 15 05:46:42.018693 kernel: NET: Registered PF_INET protocol family Jan 15 05:46:42.018706 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 15 05:46:42.018720 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 15 05:46:42.018734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 15 05:46:42.018749 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 15 05:46:42.018950 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 15 05:46:42.018992 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 15 05:46:42.019007 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:46:42.019048 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 15 05:46:42.019062 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 15 05:46:42.019075 kernel: NET: Registered PF_XDP protocol family Jan 15 05:46:42.019433 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jan 15 05:46:42.019793 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jan 15 05:46:42.020066 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 15 05:46:42.020388 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 15 05:46:42.020688 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 15 05:46:42.020955 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jan 15 05:46:42.021210 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jan 15 05:46:42.021631 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jan 15 05:46:42.021652 kernel: PCI: CLS 0 bytes, default 64 Jan 15 05:46:42.021665 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns Jan 15 05:46:42.021678 kernel: Initialise system trusted keyrings Jan 15 05:46:42.021691 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 15 05:46:42.021703 kernel: Key type asymmetric registered Jan 15 05:46:42.021714 kernel: Asymmetric key parser 'x509' registered Jan 15 05:46:42.021772 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 15 05:46:42.021787 kernel: io scheduler mq-deadline registered Jan 15 05:46:42.021801 kernel: io scheduler kyber registered Jan 15 
05:46:42.021813 kernel: io scheduler bfq registered Jan 15 05:46:42.021827 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 15 05:46:42.021841 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 15 05:46:42.021855 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 15 05:46:42.021903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 15 05:46:42.021943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 15 05:46:42.021957 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 15 05:46:42.021971 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 15 05:46:42.022016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 15 05:46:42.022029 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 15 05:46:42.022386 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 15 05:46:42.022409 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 15 05:46:42.022726 kernel: rtc_cmos 00:04: registered as rtc0 Jan 15 05:46:42.023019 kernel: rtc_cmos 00:04: setting system clock to 2026-01-15T05:46:37 UTC (1768455997) Jan 15 05:46:42.023235 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jan 15 05:46:42.023286 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 15 05:46:42.023295 kernel: efifb: probing for efifb Jan 15 05:46:42.023303 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jan 15 05:46:42.023351 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jan 15 05:46:42.023360 kernel: efifb: scrolling: redraw Jan 15 05:46:42.023368 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 15 05:46:42.023377 kernel: Console: switching to colour frame buffer device 160x50 Jan 15 05:46:42.023407 kernel: fb0: EFI VGA frame buffer device Jan 15 05:46:42.023415 kernel: pstore: Using crash dump compression: deflate Jan 15 05:46:42.023423 kernel: pstore: Registered efi_pstore as persistent store backend Jan 15 05:46:42.023431 kernel: NET: Registered PF_INET6 protocol family Jan 15 05:46:42.023440 kernel: Segment Routing with IPv6 Jan 15 05:46:42.023448 kernel: In-situ OAM (IOAM) with IPv6 Jan 15 05:46:42.023455 kernel: NET: Registered PF_PACKET protocol family Jan 15 05:46:42.023497 kernel: Key type dns_resolver registered Jan 15 05:46:42.023505 kernel: IPI shorthand broadcast: enabled Jan 15 05:46:42.023514 kernel: sched_clock: Marking stable (4965035816, 627230342)->(5770443333, -178177175) Jan 15 05:46:42.023540 kernel: registered taskstats version 1 Jan 15 05:46:42.023548 kernel: Loading compiled-in X.509 certificates Jan 15 05:46:42.023557 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a89cae614c389520e311ccbffccefdc95226b716' Jan 15 05:46:42.023565 kernel: Demotion targets for Node 0: null Jan 15 05:46:42.023590 kernel: Key type .fscrypt registered Jan 15 05:46:42.023598 kernel: Key type fscrypt-provisioning registered Jan 15 05:46:42.023607 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 15 05:46:42.023615 kernel: ima: Allocated hash algorithm: sha1 Jan 15 05:46:42.023623 kernel: ima: No architecture policies found Jan 15 05:46:42.023631 kernel: clk: Disabling unused clocks Jan 15 05:46:42.023639 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 15 05:46:42.023663 kernel: Write protecting the kernel read-only data: 47104k Jan 15 05:46:42.023672 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 15 05:46:42.023680 kernel: Run /init as init process Jan 15 05:46:42.023688 kernel: with arguments: Jan 15 05:46:42.023696 kernel: /init Jan 15 05:46:42.023704 kernel: with environment: Jan 15 05:46:42.023711 kernel: HOME=/ Jan 15 05:46:42.023740 kernel: TERM=linux Jan 15 05:46:42.023755 kernel: SCSI subsystem initialized Jan 15 05:46:42.023768 kernel: libata version 3.00 loaded. Jan 15 05:46:42.024073 kernel: ahci 0000:00:1f.2: version 3.0 Jan 15 05:46:42.024096 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 15 05:46:42.024404 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 15 05:46:42.024657 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 15 05:46:42.024958 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 15 05:46:42.025253 kernel: scsi host0: ahci Jan 15 05:46:42.025559 kernel: scsi host1: ahci Jan 15 05:46:42.025816 kernel: scsi host2: ahci Jan 15 05:46:42.026117 kernel: scsi host3: ahci Jan 15 05:46:42.026408 kernel: scsi host4: ahci Jan 15 05:46:42.026712 kernel: scsi host5: ahci Jan 15 05:46:42.026725 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 15 05:46:42.026739 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 15 05:46:42.026754 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 15 05:46:42.026768 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 15 05:46:42.026779 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 15 05:46:42.026829 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 15 05:46:42.026845 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 15 05:46:42.026859 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 15 05:46:42.026867 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 15 05:46:42.026875 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 15 05:46:42.026884 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 15 05:46:42.026892 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 15 05:46:42.026924 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:46:42.026932 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 15 05:46:42.026944 kernel: ata3.00: applying bridge limits Jan 15 05:46:42.026958 kernel: ata3.00: LPM support broken, forcing max_power Jan 15 05:46:42.026970 kernel: ata3.00: configured for UDMA/100 Jan 15 05:46:42.027394 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 15 05:46:42.027753 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 15 05:46:42.028116 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 15 05:46:42.028139 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 15 05:46:42.028155 kernel: GPT:16515071 != 27000831 Jan 15 05:46:42.028169 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jan 15 05:46:42.028180 kernel: GPT:16515071 != 27000831 Jan 15 05:46:42.028193 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 15 05:46:42.028251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 15 05:46:42.028573 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 15 05:46:42.028614 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 15 05:46:42.028906 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 15 05:46:42.028928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 15 05:46:42.028942 kernel: device-mapper: uevent: version 1.0.3 Jan 15 05:46:42.028956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 15 05:46:42.029014 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 15 05:46:42.029028 kernel: raid6: avx2x4 gen() 15631 MB/s Jan 15 05:46:42.029042 kernel: raid6: avx2x2 gen() 18363 MB/s Jan 15 05:46:42.029055 kernel: raid6: avx2x1 gen() 5815 MB/s Jan 15 05:46:42.029068 kernel: raid6: using algorithm avx2x2 gen() 18363 MB/s Jan 15 05:46:42.029081 kernel: raid6: .... xor() 7530 MB/s, rmw enabled Jan 15 05:46:42.029095 kernel: raid6: using avx2x2 recovery algorithm Jan 15 05:46:42.029153 kernel: xor: automatically using best checksumming function avx Jan 15 05:46:42.029167 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 15 05:46:42.029181 kernel: BTRFS: device fsid 0b6e2cdd-9800-410c-b18c-88de6acfe8db devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (183) Jan 15 05:46:42.029194 kernel: BTRFS info (device dm-0): first mount of filesystem 0b6e2cdd-9800-410c-b18c-88de6acfe8db Jan 15 05:46:42.029208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:46:42.029221 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 15 05:46:42.029234 kernel: BTRFS info (device dm-0): enabling free space tree Jan 15 05:46:42.029286 kernel: loop: module loaded Jan 15 05:46:42.029301 kernel: loop0: detected capacity change from 0 to 100536 Jan 15 05:46:42.029371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 05:46:42.029389 systemd[1]: Successfully made /usr/ read-only. Jan 15 05:46:42.029405 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:46:42.029491 systemd[1]: Detected virtualization kvm. Jan 15 05:46:42.029508 systemd[1]: Detected architecture x86-64. Jan 15 05:46:42.029522 systemd[1]: Running in initrd. Jan 15 05:46:42.029536 systemd[1]: No hostname configured, using default hostname. Jan 15 05:46:42.029550 systemd[1]: Hostname set to . Jan 15 05:46:42.029564 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:46:42.029578 systemd[1]: Queued start job for default target initrd.target. Jan 15 05:46:42.029635 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 15 05:46:42.029651 kernel: hrtimer: interrupt took 2633686 ns Jan 15 05:46:42.029666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 15 05:46:42.029680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:46:42.029696 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 15 05:46:42.029710 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:46:42.029764 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 15 05:46:42.029781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 15 05:46:42.029796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:46:42.029809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:46:42.029823 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:46:42.029837 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:46:42.029891 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:46:42.029907 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:46:42.029920 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:46:42.029934 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:46:42.029947 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:46:42.029960 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:46:42.029973 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 15 05:46:42.030016 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 15 05:46:42.030029 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:46:42.030041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:46:42.030053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:46:42.030066 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:46:42.030080 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 15 05:46:42.030092 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 15 05:46:42.030133 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:46:42.030146 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 15 05:46:42.030160 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 15 05:46:42.030173 systemd[1]: Starting systemd-fsck-usr.service... Jan 15 05:46:42.030185 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:46:42.030198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:46:42.030238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:46:42.030252 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 15 05:46:42.030265 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:46:42.030279 systemd[1]: Finished systemd-fsck-usr.service. Jan 15 05:46:42.030390 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 15 05:46:42.030405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 15 05:46:42.030419 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:46:42.030432 kernel: Bridge firewalling registered Jan 15 05:46:42.030552 systemd-journald[317]: Collecting audit messages is enabled. Jan 15 05:46:42.030622 kernel: audit: type=1130 audit(1768456001.993:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.030637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:46:42.030652 kernel: audit: type=1130 audit(1768456002.007:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.030666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 15 05:46:42.030680 kernel: audit: type=1130 audit(1768456002.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.030730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 15 05:46:42.030748 systemd-journald[317]: Journal started Jan 15 05:46:42.030772 systemd-journald[317]: Runtime Journal (/run/log/journal/b991dd37fdf042239b48f923ffaea82e) is 6M, max 48M, 42M free. Jan 15 05:46:41.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:41.993009 systemd-modules-load[320]: Inserted module 'br_netfilter' Jan 15 05:46:42.053494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 05:46:42.061759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 05:46:42.064777 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:46:42.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.070237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 05:46:42.076164 kernel: audit: type=1130 audit(1768456002.065:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.090042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 15 05:46:42.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.096109 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:46:42.117262 kernel: audit: type=1130 audit(1768456002.095:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.117433 kernel: audit: type=1130 audit(1768456002.103:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.099514 systemd-tmpfiles[340]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 15 05:46:42.117520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:46:42.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.132208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:46:42.157119 kernel: audit: type=1130 audit(1768456002.127:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.178986 kernel: audit: type=1130 audit(1768456002.164:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.204268 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 15 05:46:42.220142 kernel: audit: type=1334 audit(1768456002.210:10): prog-id=6 op=LOAD Jan 15 05:46:42.210000 audit: BPF prog-id=6 op=LOAD Jan 15 05:46:42.222012 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:46:42.297272 dracut-cmdline[356]: dracut-109 Jan 15 05:46:42.322296 dracut-cmdline[356]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=887fe536bc7dee8d2b53c9de10cc8ce6b9ee17760dbc66777e9125cc88a34922 Jan 15 05:46:42.466990 systemd-resolved[357]: Positive Trust Anchors: Jan 15 05:46:42.467019 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:46:42.467024 systemd-resolved[357]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:46:42.467052 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:46:42.583074 systemd-resolved[357]: Defaulting to hostname 'linux'. Jan 15 05:46:42.593109 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:46:42.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:42.601990 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:46:42.614780 kernel: audit: type=1130 audit(1768456002.601:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:43.121438 kernel: Loading iSCSI transport class v2.0-870. Jan 15 05:46:43.147529 kernel: iscsi: registered transport (tcp) Jan 15 05:46:43.199383 kernel: iscsi: registered transport (qla4xxx) Jan 15 05:46:43.199562 kernel: QLogic iSCSI HBA Driver Jan 15 05:46:43.281401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:46:43.320304 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:46:43.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:43.337824 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:46:43.737619 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 15 05:46:43.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:43.768300 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 15 05:46:43.785905 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 15 05:46:43.847048 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 15 05:46:43.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:43.871000 audit: BPF prog-id=7 op=LOAD Jan 15 05:46:43.872000 audit: BPF prog-id=8 op=LOAD Jan 15 05:46:43.873562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:46:43.916847 systemd-udevd[591]: Using default interface naming scheme 'v257'. Jan 15 05:46:43.939287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 15 05:46:43.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:43.972224 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 15 05:46:44.014538 dracut-pre-trigger[649]: rd.md=0: removing MD RAID activation Jan 15 05:46:44.035822 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:46:44.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:44.039000 audit: BPF prog-id=9 op=LOAD Jan 15 05:46:44.040646 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:46:44.094798 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:46:44.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:44.099657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:46:44.129868 systemd-networkd[713]: lo: Link UP Jan 15 05:46:44.129891 systemd-networkd[713]: lo: Gained carrier Jan 15 05:46:44.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:44.130938 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:46:44.134412 systemd[1]: Reached target network.target - Network. Jan 15 05:46:44.270301 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:46:44.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:44.275480 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 15 05:46:44.359215 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 15 05:46:44.404605 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 15 05:46:44.419293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:46:44.427804 kernel: cryptd: max_cpu_qlen set to 1000 Jan 15 05:46:44.441038 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 15 05:46:44.455729 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 15 05:46:44.467135 kernel: AES CTR mode by8 optimization enabled Jan 15 05:46:44.462888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:46:44.463060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:46:44.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:44.463754 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:46:44.506183 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 15 05:46:44.463759 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:46:44.517397 disk-uuid[810]: Primary Header is updated. Jan 15 05:46:44.517397 disk-uuid[810]: Secondary Entries is updated. Jan 15 05:46:44.517397 disk-uuid[810]: Secondary Header is updated. Jan 15 05:46:44.465665 systemd-networkd[713]: eth0: Link UP Jan 15 05:46:44.466041 systemd-networkd[713]: eth0: Gained carrier Jan 15 05:46:44.466057 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:46:44.477837 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:46:44.483687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:46:44.504520 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:46:44.586779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:46:44.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:45.085673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 15 05:46:45.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:45.092270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:46:45.098444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:46:45.104483 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:46:45.111372 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 15 05:46:45.173674 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:46:45.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:45.588179 disk-uuid[820]: Warning: The kernel is still using the old partition table. Jan 15 05:46:45.588179 disk-uuid[820]: The new table will be used at the next reboot or after you Jan 15 05:46:45.588179 disk-uuid[820]: run partprobe(8) or kpartx(8) Jan 15 05:46:45.588179 disk-uuid[820]: The operation has completed successfully. Jan 15 05:46:45.609594 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 15 05:46:45.609833 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 15 05:46:45.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:45.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:45.690640 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 15 05:46:45.818471 systemd-networkd[713]: eth0: Gained IPv6LL Jan 15 05:46:45.928574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Jan 15 05:46:45.937600 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:46:45.937927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:46:45.959975 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:46:45.960424 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:46:46.012735 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:46:46.015990 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 15 05:46:46.020571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 15 05:46:46.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:46.774121 ignition[879]: Ignition 2.24.0 Jan 15 05:46:46.774176 ignition[879]: Stage: fetch-offline Jan 15 05:46:46.774303 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:46.774403 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:46.774847 ignition[879]: parsed url from cmdline: "" Jan 15 05:46:46.774853 ignition[879]: no config URL provided Jan 15 05:46:46.775383 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 15 05:46:46.775402 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 15 05:46:46.775595 ignition[879]: op(1): [started] loading QEMU firmware config module Jan 15 05:46:46.775604 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 15 05:46:46.795117 ignition[879]: op(1): [finished] loading QEMU firmware config module Jan 15 05:46:47.207891 ignition[879]: parsing config with SHA512: 8fbb005a4d1bcb45ccfca4b4f6c774894feb6e851fc2aeb916657f11561b8f3f53f38856d468e5144c696feae363e01793aa46bbea1ebfed059cb3f42a3114df Jan 15 05:46:47.227156 unknown[879]: fetched base config from "system" Jan 15 05:46:47.227181 unknown[879]: fetched user config from "qemu" Jan 15 05:46:47.227804 ignition[879]: fetch-offline: fetch-offline passed Jan 15 05:46:47.227959 ignition[879]: Ignition finished successfully Jan 15 05:46:47.234072 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:46:47.269623 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 15 05:46:47.269681 kernel: audit: type=1130 audit(1768456007.241:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.242289 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 15 05:46:47.246091 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 15 05:46:47.666241 ignition[889]: Ignition 2.24.0 Jan 15 05:46:47.666268 ignition[889]: Stage: kargs Jan 15 05:46:47.666583 ignition[889]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:47.666596 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:47.675668 ignition[889]: kargs: kargs passed Jan 15 05:46:47.675738 ignition[889]: Ignition finished successfully Jan 15 05:46:47.682432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 15 05:46:47.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.688553 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 15 05:46:47.697380 kernel: audit: type=1130 audit(1768456007.686:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.775158 ignition[897]: Ignition 2.24.0 Jan 15 05:46:47.775186 ignition[897]: Stage: disks Jan 15 05:46:47.775513 ignition[897]: no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:47.775560 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:47.781143 ignition[897]: disks: disks passed Jan 15 05:46:47.781223 ignition[897]: Ignition finished successfully Jan 15 05:46:47.797289 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 15 05:46:47.812291 kernel: audit: type=1130 audit(1768456007.800:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:47.801690 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 15 05:46:47.813733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 15 05:46:47.887720 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:46:47.914041 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:46:47.915258 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:46:47.924914 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 15 05:46:47.993790 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 15 05:46:47.999306 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 15 05:46:48.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:48.010656 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 15 05:46:48.021718 kernel: audit: type=1130 audit(1768456008.009:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:48.190363 kernel: EXT4-fs (vda9): mounted filesystem a9a0585b-a83b-49e4-a2e7-8f2fc277193d r/w with ordered data mode. Quota mode: none. Jan 15 05:46:48.190689 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jan 15 05:46:48.192663 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 15 05:46:48.199971 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:46:48.202136 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 15 05:46:48.206616 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 15 05:46:48.206703 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 15 05:46:48.206732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:46:48.253092 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 15 05:46:48.256674 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 15 05:46:48.271903 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Jan 15 05:46:48.271927 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:46:48.271965 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:46:48.278054 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:46:48.278083 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:46:48.279817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 05:46:48.498015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 15 05:46:48.514673 kernel: audit: type=1130 audit(1768456008.500:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:48.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:48.502670 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 15 05:46:48.530206 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 15 05:46:48.541457 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 15 05:46:48.547410 kernel: BTRFS info (device vda6): last unmount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:46:49.286108 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1015687676 wd_nsec: 1015687224 Jan 15 05:46:49.290360 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 15 05:46:49.300687 kernel: audit: type=1130 audit(1768456009.292:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:49.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:49.340955 ignition[1010]: INFO : Ignition 2.24.0 Jan 15 05:46:49.340955 ignition[1010]: INFO : Stage: mount Jan 15 05:46:49.346676 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:49.346676 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:49.375070 ignition[1010]: INFO : mount: mount passed Jan 15 05:46:49.377078 ignition[1010]: INFO : Ignition finished successfully Jan 15 05:46:49.378396 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 15 05:46:49.390130 kernel: audit: type=1130 audit(1768456009.380:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:49.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:49.383452 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 15 05:46:49.409427 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 15 05:46:49.439374 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Jan 15 05:46:49.445639 kernel: BTRFS info (device vda6): first mount of filesystem 481eb5ac-ea9e-4f33-83b3-51301310e9c7 Jan 15 05:46:49.445765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 15 05:46:49.457067 kernel: BTRFS info (device vda6): turning on async discard Jan 15 05:46:49.457098 kernel: BTRFS info (device vda6): enabling free space tree Jan 15 05:46:49.459904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 15 05:46:49.513682 ignition[1040]: INFO : Ignition 2.24.0 Jan 15 05:46:49.513682 ignition[1040]: INFO : Stage: files Jan 15 05:46:49.518866 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:49.518866 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:49.518866 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jan 15 05:46:49.518866 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 15 05:46:49.518866 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 15 05:46:49.540374 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 15 05:46:49.540374 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 15 05:46:49.540374 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 15 05:46:49.522196 unknown[1040]: wrote ssh authorized keys file for user: core Jan 15 05:46:49.561248 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 15 05:46:49.561248 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 15 05:46:49.621049 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 15 05:46:49.744433 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[started] writing file "/sysroot/home/core/install.sh" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 15 05:46:49.754921 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 15 05:46:49.813113 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 15 05:46:49.813113 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 15 05:46:49.813113 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 15 05:46:50.129445 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 15 05:46:51.723808 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 15 05:46:51.723808 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 15 05:46:51.735298 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 15 05:46:52.080416 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:46:52.090630 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 15 05:46:52.094630 ignition[1040]: INFO : files: files passed Jan 15 05:46:52.094630 ignition[1040]: INFO : Ignition finished successfully Jan 15 05:46:52.123448 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 15 05:46:52.136629 kernel: audit: type=1130 audit(1768456012.124:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.126682 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 15 05:46:52.155476 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 15 05:46:52.159652 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 15 05:46:52.181033 kernel: audit: type=1130 audit(1768456012.160:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.181067 kernel: audit: type=1131 audit(1768456012.160:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.159822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 15 05:46:52.195193 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory Jan 15 05:46:52.202736 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:46:52.202736 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:46:52.208251 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 15 05:46:52.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.206255 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:46:52.215190 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 15 05:46:52.217684 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 15 05:46:52.317743 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 15 05:46:52.317923 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 15 05:46:52.340887 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:46:52.340921 kernel: audit: type=1130 audit(1768456012.322:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.340935 kernel: audit: type=1131 audit(1768456012.322:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.324095 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 15 05:46:52.341283 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 15 05:46:52.356865 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 15 05:46:52.360933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 15 05:46:52.404010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:46:52.415757 kernel: audit: type=1130 audit(1768456012.407:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.417212 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 15 05:46:52.463136 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. 
Jan 15 05:46:52.466973 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:46:52.468375 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:46:52.473943 systemd[1]: Stopped target timers.target - Timer Units. Jan 15 05:46:52.479405 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 15 05:46:52.493236 kernel: audit: type=1131 audit(1768456012.483:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.479481 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 15 05:46:52.493456 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 15 05:46:52.499116 systemd[1]: Stopped target basic.target - Basic System. Jan 15 05:46:52.504089 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 15 05:46:52.509348 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 15 05:46:52.516632 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 15 05:46:52.519058 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 15 05:46:52.525348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 15 05:46:52.531882 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 15 05:46:52.536968 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 15 05:46:52.542190 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 15 05:46:52.551137 systemd[1]: Stopped target swap.target - Swaps. Jan 15 05:46:52.569684 kernel: audit: type=1131 audit(1768456012.561:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.556671 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 15 05:46:52.556771 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 15 05:46:52.568678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:46:52.570922 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:46:52.575818 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 15 05:46:52.599113 kernel: audit: type=1131 audit(1768456012.590:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.576040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 15 05:46:52.607975 kernel: audit: type=1131 audit(1768456012.599:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.580986 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 15 05:46:52.581053 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 15 05:46:52.597800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 15 05:46:52.597891 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 15 05:46:52.600453 systemd[1]: Stopped target paths.target - Path Units. Jan 15 05:46:52.613875 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 15 05:46:52.617490 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:46:52.622970 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 05:46:52.623946 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 05:46:52.634751 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 05:46:52.670411 kernel: audit: type=1131 audit(1768456012.656:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.634817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 05:46:52.681257 kernel: audit: type=1131 audit(1768456012.671:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.640912 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 05:46:52.640957 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 05:46:52.692967 kernel: audit: type=1131 audit(1768456012.684:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.642997 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 15 05:46:52.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.643037 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Jan 15 05:46:52.655149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 15 05:46:52.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.655219 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 15 05:46:52.657154 systemd[1]: ignition-files.service: Deactivated successfully. Jan 15 05:46:52.657212 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 15 05:46:52.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.674479 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 15 05:46:52.683808 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 15 05:46:52.683874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:46:52.686266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 15 05:46:52.692965 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 05:46:52.693027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:46:52.693834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 15 05:46:52.693887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:46:52.694397 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 15 05:46:52.694449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 15 05:46:52.696736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 15 05:46:52.696871 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 15 05:46:52.798066 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 15 05:46:52.811691 ignition[1098]: INFO : Ignition 2.24.0 Jan 15 05:46:52.811691 ignition[1098]: INFO : Stage: umount Jan 15 05:46:52.816606 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 15 05:46:52.816606 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 15 05:46:52.816606 ignition[1098]: INFO : umount: umount passed Jan 15 05:46:52.816606 ignition[1098]: INFO : Ignition finished successfully Jan 15 05:46:52.828453 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 15 05:46:52.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.828653 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 15 05:46:52.831424 systemd[1]: Stopped target network.target - Network. 
Jan 15 05:46:52.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.838389 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 15 05:46:52.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.838461 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 15 05:46:52.843974 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 15 05:46:52.844074 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 15 05:46:52.851128 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 05:46:52.851212 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 05:46:52.854484 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 05:46:52.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.854542 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 05:46:52.860292 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 05:46:52.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.868094 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 05:46:52.869786 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 05:46:52.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.869917 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 05:46:52.878712 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 05:46:52.878831 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 05:46:52.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.893231 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 05:46:52.893466 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 05:46:52.903868 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 05:46:52.904051 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 05:46:52.918661 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 05:46:52.920060 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 15 05:46:52.920156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:46:52.932000 audit: BPF prog-id=6 op=UNLOAD Jan 15 05:46:52.936187 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 05:46:52.937286 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 05:46:52.936000 audit: BPF prog-id=9 op=UNLOAD Jan 15 05:46:52.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.937424 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 05:46:52.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.945632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 05:46:52.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.945711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:46:52.951286 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 05:46:52.951404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 05:46:52.958004 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 05:46:52.987831 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 05:46:52.988097 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:46:52.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.992188 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 15 05:46:52.992262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 05:46:52.997863 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 05:46:53.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:52.997905 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:46:53.002897 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 05:46:53.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.002959 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 05:46:53.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.014834 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 05:46:53.014928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 05:46:53.019495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 15 05:46:53.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.019605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 05:46:53.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.034007 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 05:46:53.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.038013 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 05:46:53.038086 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:46:53.044449 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 05:46:53.044558 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:46:53.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.050451 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 05:46:53.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:53.050517 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:46:53.069915 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 05:46:53.070062 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 05:46:53.076151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 05:46:53.076292 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 05:46:53.079554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 05:46:53.089282 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 05:46:53.120865 systemd[1]: Switching root. Jan 15 05:46:53.155500 systemd-journald[317]: Journal stopped Jan 15 05:46:55.283448 systemd-journald[317]: Received SIGTERM from PID 1 (systemd). 
Jan 15 05:46:55.283574 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 05:46:55.283684 kernel: SELinux: policy capability open_perms=1 Jan 15 05:46:55.283708 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 05:46:55.283726 kernel: SELinux: policy capability always_check_network=0 Jan 15 05:46:55.283755 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 05:46:55.283813 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 05:46:55.283860 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 05:46:55.283880 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 05:46:55.283925 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 05:46:55.283945 systemd[1]: Successfully loaded SELinux policy in 82.965ms. Jan 15 05:46:55.284022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.968ms. Jan 15 05:46:55.284053 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 05:46:55.284067 systemd[1]: Detected virtualization kvm. Jan 15 05:46:55.284081 systemd[1]: Detected architecture x86-64. Jan 15 05:46:55.284095 systemd[1]: Detected first boot. Jan 15 05:46:55.284147 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 15 05:46:55.284162 zram_generator::config[1142]: No configuration found. Jan 15 05:46:55.284191 kernel: Guest personality initialized and is inactive Jan 15 05:46:55.284221 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 15 05:46:55.284235 kernel: Initialized host personality Jan 15 05:46:55.284247 kernel: NET: Registered PF_VSOCK protocol family Jan 15 05:46:55.284261 systemd[1]: Populated /etc with preset unit settings. Jan 15 05:46:55.284296 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 05:46:55.284358 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 05:46:55.284392 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 05:46:55.284412 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 05:46:55.284547 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 05:46:55.284564 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 05:46:55.284646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 05:46:55.284676 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 05:46:55.284691 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 05:46:55.284720 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 15 05:46:55.284748 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 05:46:55.284762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 05:46:55.284789 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 05:46:55.284803 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 15 05:46:55.284842 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 05:46:55.284856 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 05:46:55.284870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 05:46:55.284948 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 05:46:55.284962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 05:46:55.284976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 05:46:55.285005 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 05:46:55.285019 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 05:46:55.285031 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 05:46:55.285066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 05:46:55.285081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 05:46:55.285094 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 05:46:55.285107 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 15 05:46:55.285120 systemd[1]: Reached target slices.target - Slice Units. Jan 15 05:46:55.285133 systemd[1]: Reached target swap.target - Swaps. Jan 15 05:46:55.285146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 15 05:46:55.285180 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 05:46:55.285195 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 05:46:55.285223 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 15 05:46:55.285237 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 15 05:46:55.285253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 05:46:55.285268 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 15 05:46:55.285282 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 15 05:46:55.285464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 05:46:55.285492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 05:46:55.285507 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 05:46:55.285521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 05:46:55.285534 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 05:46:55.285547 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 05:46:55.285561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:46:55.285636 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 05:46:55.285655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 05:46:55.285677 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 05:46:55.285697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 15 05:46:55.285756 systemd[1]: Reached target machines.target - Containers. Jan 15 05:46:55.285780 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 05:46:55.285801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 05:46:55.285866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 05:46:55.285889 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 15 05:46:55.285908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 05:46:55.285928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:46:55.285947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 05:46:55.285961 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 05:46:55.286007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:46:55.286070 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 05:46:55.286091 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 05:46:55.286111 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 05:46:55.286165 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 05:46:55.286186 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 05:46:55.286233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:46:55.286254 kernel: ACPI: bus type drm_connector registered Jan 15 05:46:55.286291 kernel: fuse: init (API version 7.41) Jan 15 05:46:55.286366 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 05:46:55.286389 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 05:46:55.286410 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 05:46:55.286470 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 05:46:55.286520 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 05:46:55.286576 systemd-journald[1219]: Collecting audit messages is enabled. Jan 15 05:46:55.286688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 15 05:46:55.286712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:46:55.286763 systemd-journald[1219]: Journal started Jan 15 05:46:55.286796 systemd-journald[1219]: Runtime Journal (/run/log/journal/b991dd37fdf042239b48f923ffaea82e) is 6M, max 48M, 42M free. Jan 15 05:46:55.297225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 05:46:55.297265 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 15 05:46:54.951000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 15 05:46:55.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.198000 audit: BPF prog-id=14 op=UNLOAD Jan 15 05:46:55.198000 audit: BPF prog-id=13 op=UNLOAD Jan 15 05:46:55.201000 audit: BPF prog-id=15 op=LOAD Jan 15 05:46:55.201000 audit: BPF prog-id=16 op=LOAD Jan 15 05:46:55.203000 audit: BPF prog-id=17 op=LOAD Jan 15 05:46:55.271000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 15 05:46:55.271000 audit[1219]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff29caada0 a2=4000 a3=0 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:55.271000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 15 05:46:54.673998 systemd[1]: Queued start job for default target multi-user.target. Jan 15 05:46:54.698622 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 15 05:46:54.699514 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 05:46:54.700176 systemd[1]: systemd-journald.service: Consumed 1.307s CPU time. Jan 15 05:46:55.305036 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 05:46:55.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.310751 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 05:46:55.314041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 05:46:55.318510 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 05:46:55.322117 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 05:46:55.325624 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 05:46:55.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.329749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 05:46:55.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.333634 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 05:46:55.333920 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 15 05:46:55.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.337794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:46:55.338077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:46:55.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.344135 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:46:55.344912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:46:55.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.349861 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:46:55.350205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 05:46:55.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.355689 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 05:46:55.356017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 05:46:55.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.359170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:46:55.359484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 05:46:55.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:46:55.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.362820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 05:46:55.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.366216 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 05:46:55.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.371390 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 05:46:55.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.376201 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 05:46:55.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.393298 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 05:46:55.397509 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 15 05:46:55.402301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 05:46:55.406873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 05:46:55.411278 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 05:46:55.411449 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 05:46:55.416526 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 05:46:55.420934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:46:55.421098 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:46:55.426533 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 05:46:55.431586 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 15 05:46:55.435442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:46:55.436851 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 05:46:55.440251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:46:55.441972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 15 05:46:55.456828 systemd-journald[1219]: Time spent on flushing to /var/log/journal/b991dd37fdf042239b48f923ffaea82e is 14.862ms for 1187 entries. Jan 15 05:46:55.456828 systemd-journald[1219]: System Journal (/var/log/journal/b991dd37fdf042239b48f923ffaea82e) is 8M, max 163.5M, 155.5M free. Jan 15 05:46:55.484716 systemd-journald[1219]: Received client request to flush runtime journal. Jan 15 05:46:55.452549 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 05:46:55.502658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 05:46:55.510498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 05:46:55.516221 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 05:46:55.526373 kernel: loop1: detected capacity change from 0 to 229808 Jan 15 05:46:55.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.523408 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 05:46:55.528747 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 05:46:55.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.534757 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 15 05:46:55.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.543022 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 05:46:55.558019 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 05:46:55.578957 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 05:46:55.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.610412 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 05:46:55.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.616000 audit: BPF prog-id=18 op=LOAD Jan 15 05:46:55.616000 audit: BPF prog-id=19 op=LOAD Jan 15 05:46:55.616000 audit: BPF prog-id=20 op=LOAD Jan 15 05:46:55.622000 audit: BPF prog-id=21 op=LOAD Jan 15 05:46:55.618407 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 15 05:46:55.625513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 05:46:55.629691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 15 05:46:55.634000 audit: BPF prog-id=22 op=LOAD Jan 15 05:46:55.634000 audit: BPF prog-id=23 op=LOAD Jan 15 05:46:55.634000 audit: BPF prog-id=24 op=LOAD Jan 15 05:46:55.636557 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 15 05:46:55.660379 kernel: loop2: detected capacity change from 0 to 111560 Jan 15 05:46:55.657000 audit: BPF prog-id=25 op=LOAD Jan 15 05:46:55.658000 audit: BPF prog-id=26 op=LOAD Jan 15 05:46:55.658000 audit: BPF prog-id=27 op=LOAD Jan 15 05:46:55.660499 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 05:46:55.681048 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 05:46:55.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.701649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 05:46:55.784509 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Jan 15 05:46:55.784539 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. Jan 15 05:46:55.794396 kernel: loop3: detected capacity change from 0 to 50784 Jan 15 05:46:55.794848 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 05:46:55.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.806475 systemd-nsresourced[1281]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 15 05:46:55.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.808143 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 15 05:46:55.827426 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 05:46:55.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.847420 kernel: loop4: detected capacity change from 0 to 229808 Jan 15 05:46:55.870380 kernel: loop5: detected capacity change from 0 to 111560 Jan 15 05:46:55.890492 kernel: loop6: detected capacity change from 0 to 50784 Jan 15 05:46:55.904021 (sd-merge)[1303]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 15 05:46:55.905185 systemd-oomd[1278]: No swap; memory pressure usage will be degraded Jan 15 05:46:55.906670 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 15 05:46:55.909176 (sd-merge)[1303]: Merged extensions into '/usr'. Jan 15 05:46:55.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:55.914748 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 05:46:55.914852 systemd[1]: Reloading... 
Jan 15 05:46:55.926101 systemd-resolved[1279]: Positive Trust Anchors: Jan 15 05:46:55.926530 systemd-resolved[1279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 05:46:55.926588 systemd-resolved[1279]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 15 05:46:55.926690 systemd-resolved[1279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 05:46:55.934031 systemd-resolved[1279]: Defaulting to hostname 'linux'. Jan 15 05:46:55.975428 zram_generator::config[1334]: No configuration found. Jan 15 05:46:56.189091 systemd[1]: Reloading finished in 273 ms. Jan 15 05:46:56.222211 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 15 05:46:56.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.225570 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 05:46:56.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.228941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 05:46:56.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.236704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 05:46:56.255386 systemd[1]: Starting ensure-sysext.service... Jan 15 05:46:56.259128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 05:46:56.262000 audit: BPF prog-id=8 op=UNLOAD Jan 15 05:46:56.262000 audit: BPF prog-id=7 op=UNLOAD Jan 15 05:46:56.263000 audit: BPF prog-id=28 op=LOAD Jan 15 05:46:56.263000 audit: BPF prog-id=29 op=LOAD Jan 15 05:46:56.265921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 15 05:46:56.272000 audit: BPF prog-id=30 op=LOAD Jan 15 05:46:56.272000 audit: BPF prog-id=15 op=UNLOAD Jan 15 05:46:56.272000 audit: BPF prog-id=31 op=LOAD Jan 15 05:46:56.272000 audit: BPF prog-id=32 op=LOAD Jan 15 05:46:56.272000 audit: BPF prog-id=16 op=UNLOAD Jan 15 05:46:56.272000 audit: BPF prog-id=17 op=UNLOAD Jan 15 05:46:56.274000 audit: BPF prog-id=33 op=LOAD Jan 15 05:46:56.274000 audit: BPF prog-id=22 op=UNLOAD Jan 15 05:46:56.274000 audit: BPF prog-id=34 op=LOAD Jan 15 05:46:56.274000 audit: BPF prog-id=35 op=LOAD Jan 15 05:46:56.274000 audit: BPF prog-id=23 op=UNLOAD Jan 15 05:46:56.274000 audit: BPF prog-id=24 op=UNLOAD Jan 15 05:46:56.276000 audit: BPF prog-id=36 op=LOAD Jan 15 05:46:56.276000 audit: BPF prog-id=18 op=UNLOAD Jan 15 05:46:56.277000 audit: BPF prog-id=37 op=LOAD Jan 15 05:46:56.277000 audit: BPF prog-id=38 op=LOAD Jan 15 05:46:56.277000 audit: BPF prog-id=19 op=UNLOAD Jan 15 05:46:56.277000 audit: BPF prog-id=20 op=UNLOAD Jan 15 05:46:56.279000 audit: BPF prog-id=39 op=LOAD Jan 15 05:46:56.279000 audit: BPF prog-id=25 op=UNLOAD Jan 15 05:46:56.279000 audit: BPF prog-id=40 op=LOAD Jan 15 05:46:56.279000 audit: BPF prog-id=41 op=LOAD Jan 15 05:46:56.279000 audit: BPF prog-id=26 op=UNLOAD Jan 15 05:46:56.279000 audit: BPF prog-id=27 op=UNLOAD Jan 15 05:46:56.280000 audit: BPF prog-id=42 op=LOAD Jan 15 05:46:56.289728 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 05:46:56.290227 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 05:46:56.290672 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 05:46:56.288000 audit: BPF prog-id=21 op=UNLOAD Jan 15 05:46:56.292350 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 15 05:46:56.292452 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 15 05:46:56.294662 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)... Jan 15 05:46:56.294699 systemd[1]: Reloading... Jan 15 05:46:56.301693 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:46:56.301721 systemd-tmpfiles[1373]: Skipping /boot Jan 15 05:46:56.316952 systemd-udevd[1374]: Using default interface naming scheme 'v257'. Jan 15 05:46:56.321781 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 05:46:56.321800 systemd-tmpfiles[1373]: Skipping /boot Jan 15 05:46:56.366391 zram_generator::config[1407]: No configuration found. Jan 15 05:46:56.471387 kernel: mousedev: PS/2 mouse device common for all mice Jan 15 05:46:56.487383 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 15 05:46:56.513373 kernel: ACPI: button: Power Button [PWRF] Jan 15 05:46:56.530355 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 15 05:46:56.530832 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 15 05:46:56.534561 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 15 05:46:56.644857 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 05:46:56.645229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 15 05:46:56.649046 systemd[1]: Reloading finished in 353 ms. 
Jan 15 05:46:56.665551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 05:46:56.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.672000 audit: BPF prog-id=43 op=LOAD Jan 15 05:46:56.673000 audit: BPF prog-id=42 op=UNLOAD Jan 15 05:46:56.673000 audit: BPF prog-id=44 op=LOAD Jan 15 05:46:56.674000 audit: BPF prog-id=45 op=LOAD Jan 15 05:46:56.674000 audit: BPF prog-id=28 op=UNLOAD Jan 15 05:46:56.675000 audit: BPF prog-id=29 op=UNLOAD Jan 15 05:46:56.676000 audit: BPF prog-id=46 op=LOAD Jan 15 05:46:56.676000 audit: BPF prog-id=39 op=UNLOAD Jan 15 05:46:56.677000 audit: BPF prog-id=47 op=LOAD Jan 15 05:46:56.677000 audit: BPF prog-id=48 op=LOAD Jan 15 05:46:56.678000 audit: BPF prog-id=40 op=UNLOAD Jan 15 05:46:56.678000 audit: BPF prog-id=41 op=UNLOAD Jan 15 05:46:56.690000 audit: BPF prog-id=49 op=LOAD Jan 15 05:46:56.691000 audit: BPF prog-id=36 op=UNLOAD Jan 15 05:46:56.692000 audit: BPF prog-id=50 op=LOAD Jan 15 05:46:56.692000 audit: BPF prog-id=51 op=LOAD Jan 15 05:46:56.693000 audit: BPF prog-id=37 op=UNLOAD Jan 15 05:46:56.693000 audit: BPF prog-id=38 op=UNLOAD Jan 15 05:46:56.700000 audit: BPF prog-id=52 op=LOAD Jan 15 05:46:56.700000 audit: BPF prog-id=30 op=UNLOAD Jan 15 05:46:56.700000 audit: BPF prog-id=53 op=LOAD Jan 15 05:46:56.701000 audit: BPF prog-id=54 op=LOAD Jan 15 05:46:56.701000 audit: BPF prog-id=31 op=UNLOAD Jan 15 05:46:56.701000 audit: BPF prog-id=32 op=UNLOAD Jan 15 05:46:56.704000 audit: BPF prog-id=55 op=LOAD Jan 15 05:46:56.705000 audit: BPF prog-id=33 op=UNLOAD Jan 15 05:46:56.705000 audit: BPF prog-id=56 op=LOAD Jan 15 05:46:56.706000 audit: BPF prog-id=57 op=LOAD Jan 15 05:46:56.707000 audit: BPF prog-id=34 op=UNLOAD Jan 15 05:46:56.707000 audit: BPF prog-id=35 op=UNLOAD Jan 15 05:46:56.716719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 05:46:56.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.765373 kernel: kvm_amd: TSC scaling supported Jan 15 05:46:56.765464 kernel: kvm_amd: Nested Virtualization enabled Jan 15 05:46:56.765487 kernel: kvm_amd: Nested Paging enabled Jan 15 05:46:56.765512 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 15 05:46:56.768945 kernel: kvm_amd: PMU virtualization is disabled Jan 15 05:46:56.795792 systemd[1]: Finished ensure-sysext.service. Jan 15 05:46:56.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.824429 kernel: EDAC MC: Ver: 3.0.0 Jan 15 05:46:56.832486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:46:56.834287 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:46:56.838863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 05:46:56.840267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 15 05:46:56.841891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 05:46:56.857959 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 05:46:56.865529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 05:46:56.871733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 05:46:56.874861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 05:46:56.875160 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 15 05:46:56.878537 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 05:46:56.884814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 05:46:56.888900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 05:46:56.894802 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 05:46:56.900000 audit: BPF prog-id=58 op=LOAD Jan 15 05:46:56.902225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 05:46:56.909000 audit: BPF prog-id=59 op=LOAD Jan 15 05:46:56.911282 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 15 05:46:56.921709 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 15 05:46:56.929522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 05:46:56.932524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 15 05:46:56.934949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 05:46:56.935286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 05:46:56.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.944488 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 05:46:56.944829 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 05:46:56.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.948592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 05:46:56.948950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 15 05:46:56.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.953059 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 05:46:56.956824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 05:46:56.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.961393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 05:46:56.963000 audit[1513]: SYSTEM_BOOT pid=1513 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.985055 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 05:46:56.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:46:56.989821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 05:46:56.990514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 05:46:56.991000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 15 05:46:56.991000 audit[1530]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecf4a8280 a2=420 a3=0 items=0 ppid=1489 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:46:56.991000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:46:56.992158 augenrules[1530]: No rules Jan 15 05:46:56.993026 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:46:56.997072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 05:46:57.002985 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 05:46:57.036736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 15 05:46:57.038520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 05:46:57.064971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 05:46:57.092096 systemd-networkd[1507]: lo: Link UP Jan 15 05:46:57.092108 systemd-networkd[1507]: lo: Gained carrier Jan 15 05:46:57.094763 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:46:57.094773 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 05:46:57.095140 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 05:46:57.096273 systemd-networkd[1507]: eth0: Link UP Jan 15 05:46:57.096837 systemd-networkd[1507]: eth0: Gained carrier Jan 15 05:46:57.096851 systemd-networkd[1507]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 15 05:46:57.099702 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 15 05:46:57.103216 systemd[1]: Reached target network.target - Network. Jan 15 05:46:57.106032 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 05:46:57.111414 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 05:46:57.116598 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 05:46:57.118020 systemd-networkd[1507]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 15 05:46:57.121020 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 15 05:46:58.976513 systemd-resolved[1279]: Clock change detected. Flushing caches. Jan 15 05:46:58.976540 systemd-timesyncd[1509]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 15 05:46:58.976609 systemd-timesyncd[1509]: Initial clock synchronization to Thu 2026-01-15 05:46:58.976415 UTC. Jan 15 05:46:59.001877 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 05:46:59.278841 ldconfig[1499]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 05:46:59.284078 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 05:46:59.289769 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 05:46:59.326948 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 05:46:59.329969 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 05:46:59.332711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 05:46:59.335764 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 05:46:59.338779 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 15 05:46:59.341739 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 05:46:59.344457 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 05:46:59.347563 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Jan 15 05:46:59.350721 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 15 05:46:59.353403 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 05:46:59.356435 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 05:46:59.356482 systemd[1]: Reached target paths.target - Path Units. Jan 15 05:46:59.358741 systemd[1]: Reached target timers.target - Timer Units. Jan 15 05:46:59.362503 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 05:46:59.367527 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 05:46:59.372177 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 05:46:59.375785 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 05:46:59.379022 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 05:46:59.391887 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 15 05:46:59.394852 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 05:46:59.398495 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 05:46:59.402143 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 05:46:59.404489 systemd[1]: Reached target basic.target - Basic System. Jan 15 05:46:59.406846 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 05:46:59.406902 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 05:46:59.408226 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 05:46:59.412237 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 05:46:59.415804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 05:46:59.424489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 05:46:59.428808 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 05:46:59.431340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 05:46:59.433070 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 15 05:46:59.436696 jq[1558]: false Jan 15 05:46:59.437231 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 05:46:59.443072 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 05:46:59.447892 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 05:46:59.449407 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 15 05:46:59.449433 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jan 15 05:46:59.455595 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 15 05:46:59.457509 extend-filesystems[1559]: Found /dev/vda6 Jan 15 05:46:59.463135 extend-filesystems[1559]: Found /dev/vda9 Jan 15 05:46:59.465496 extend-filesystems[1559]: Checking size of /dev/vda9 Jan 15 05:46:59.473255 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 15 05:46:59.473255 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 05:46:59.473230 oslogin_cache_refresh[1560]: Failure getting users, quitting Jan 15 05:46:59.473401 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 15 05:46:59.473260 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 15 05:46:59.473318 oslogin_cache_refresh[1560]: Refreshing group entry cache Jan 15 05:46:59.474631 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 05:46:59.478535 extend-filesystems[1559]: Resized partition /dev/vda9 Jan 15 05:46:59.484250 extend-filesystems[1580]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 05:46:59.483486 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 05:46:59.484270 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 05:46:59.488498 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 15 05:46:59.488498 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:46:59.488436 oslogin_cache_refresh[1560]: Failure getting groups, quitting Jan 15 05:46:59.488458 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 15 05:46:59.491423 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 15 05:46:59.491443 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 05:46:59.501215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 05:46:59.509058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 05:46:59.519421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 05:46:59.521912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 05:46:59.522339 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 15 05:46:59.522767 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 15 05:46:59.526018 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 05:46:59.526325 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 15 05:46:59.531237 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 05:46:59.532260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 15 05:46:59.548440 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 15 05:46:59.551548 jq[1585]: true Jan 15 05:46:59.578336 extend-filesystems[1580]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 15 05:46:59.578336 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 15 05:46:59.578336 extend-filesystems[1580]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 15 05:46:59.603523 update_engine[1583]: I20260115 05:46:59.562165 1583 main.cc:92] Flatcar Update Engine starting Jan 15 05:46:59.603815 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jan 15 05:46:59.579964 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 05:46:59.607253 tar[1595]: linux-amd64/LICENSE Jan 15 05:46:59.607253 tar[1595]: linux-amd64/helm Jan 15 05:46:59.580389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 05:46:59.611741 jq[1604]: true Jan 15 05:46:59.615142 dbus-daemon[1556]: [system] SELinux support is enabled Jan 15 05:46:59.615706 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 05:46:59.622818 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 05:46:59.622851 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 05:46:59.626815 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 05:46:59.626856 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 15 05:46:59.631327 update_engine[1583]: I20260115 05:46:59.631228 1583 update_check_scheduler.cc:74] Next update check in 11m34s Jan 15 05:46:59.632202 systemd[1]: Started update-engine.service - Update Engine. Jan 15 05:46:59.638868 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 05:46:59.653182 systemd-logind[1577]: Watching system buttons on /dev/input/event2 (Power Button) Jan 15 05:46:59.653715 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 15 05:46:59.655545 systemd-logind[1577]: New seat seat0. Jan 15 05:46:59.660572 systemd[1]: Started systemd-logind.service - User Login Management. Jan 15 05:46:59.717619 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Jan 15 05:46:59.720405 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 05:46:59.724539 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 15 05:46:59.739685 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 05:46:59.815748 sshd_keygen[1584]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 05:46:59.820109 containerd[1600]: time="2026-01-15T05:46:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 05:46:59.820869 containerd[1600]: time="2026-01-15T05:46:59.820814409Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 15 05:46:59.837477 containerd[1600]: time="2026-01-15T05:46:59.837428091Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.885µs" Jan 15 05:46:59.837614 containerd[1600]: time="2026-01-15T05:46:59.837596016Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 05:46:59.837746 containerd[1600]: time="2026-01-15T05:46:59.837729324Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 05:46:59.837799 containerd[1600]: time="2026-01-15T05:46:59.837786480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 05:46:59.838010 containerd[1600]: time="2026-01-15T05:46:59.837992836Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 05:46:59.838078 containerd[1600]: time="2026-01-15T05:46:59.838058048Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838258 containerd[1600]: time="2026-01-15T05:46:59.838236871Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838314 containerd[1600]: time="2026-01-15T05:46:59.838301752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838701 containerd[1600]: time="2026-01-15T05:46:59.838635596Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838759 containerd[1600]: time="2026-01-15T05:46:59.838746613Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838803 containerd[1600]: time="2026-01-15T05:46:59.838791136Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 05:46:59.838842 containerd[1600]: time="2026-01-15T05:46:59.838831852Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839106 containerd[1600]: time="2026-01-15T05:46:59.839080366Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839159 containerd[1600]: time="2026-01-15T05:46:59.839147682Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native 
type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839296 containerd[1600]: time="2026-01-15T05:46:59.839279137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839698 containerd[1600]: time="2026-01-15T05:46:59.839635802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839845 containerd[1600]: time="2026-01-15T05:46:59.839827811Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 05:46:59.839893 containerd[1600]: time="2026-01-15T05:46:59.839881882Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 05:46:59.839991 containerd[1600]: time="2026-01-15T05:46:59.839974415Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 05:46:59.840555 containerd[1600]: time="2026-01-15T05:46:59.840535743Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 05:46:59.840739 containerd[1600]: time="2026-01-15T05:46:59.840720457Z" level=info msg="metadata content store policy set" policy=shared Jan 15 05:46:59.846116 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 05:46:59.849239 containerd[1600]: time="2026-01-15T05:46:59.849127729Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 05:46:59.849239 containerd[1600]: time="2026-01-15T05:46:59.849185668Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:46:59.849445 containerd[1600]: time="2026-01-15T05:46:59.849425645Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 15 05:46:59.849512 containerd[1600]: time="2026-01-15T05:46:59.849498461Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 05:46:59.849561 containerd[1600]: time="2026-01-15T05:46:59.849550379Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 05:46:59.849607 containerd[1600]: time="2026-01-15T05:46:59.849596104Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 05:46:59.849713 containerd[1600]: time="2026-01-15T05:46:59.849692022Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 05:46:59.849770 containerd[1600]: time="2026-01-15T05:46:59.849757615Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 05:46:59.849813 containerd[1600]: time="2026-01-15T05:46:59.849802239Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 05:46:59.849943 containerd[1600]: time="2026-01-15T05:46:59.849926821Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 05:46:59.850054 containerd[1600]: time="2026-01-15T05:46:59.850041175Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Jan 15 05:46:59.850241 containerd[1600]: time="2026-01-15T05:46:59.850141421Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 05:46:59.850241 containerd[1600]: time="2026-01-15T05:46:59.850163844Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 05:46:59.850241 containerd[1600]: time="2026-01-15T05:46:59.850176818Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 05:46:59.850625 containerd[1600]: time="2026-01-15T05:46:59.850593596Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850741782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850770887Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850788620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850805501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850820208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850833193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850842400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850851607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850861656Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850870933Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850897733Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850940002Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.850984114Z" level=info msg="Start snapshots syncer" Jan 15 05:46:59.851814 containerd[1600]: time="2026-01-15T05:46:59.851029229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 05:46:59.852065 containerd[1600]: time="2026-01-15T05:46:59.851388329Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 05:46:59.852065 containerd[1600]: time="2026-01-15T05:46:59.851487694Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851535243Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851676406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851700432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851710871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851720078Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851736940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851746257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851755384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851765543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 
05:46:59.852239 containerd[1600]: time="2026-01-15T05:46:59.851775091Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854586369Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854608840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854617957Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854702225Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854716181Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854731640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854742650Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854803363Z" level=info msg="runtime interface created" Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854811038Z" level=info msg="created NRI interface" Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854820736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854832818Z" level=info msg="Connect containerd service" Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.854851853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 05:46:59.857200 containerd[1600]: time="2026-01-15T05:46:59.856523083Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 05:46:59.852287 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 05:46:59.875593 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 05:46:59.876007 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 05:46:59.881535 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 15 05:46:59.914097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 05:46:59.920950 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 05:46:59.926847 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 05:46:59.931321 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 15 05:46:59.969799 containerd[1600]: time="2026-01-15T05:46:59.969730033Z" level=info msg="Start subscribing containerd event" Jan 15 05:46:59.970110 containerd[1600]: time="2026-01-15T05:46:59.970062694Z" level=info msg="Start recovering state" Jan 15 05:46:59.970469 containerd[1600]: time="2026-01-15T05:46:59.969792120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 05:46:59.970529 containerd[1600]: time="2026-01-15T05:46:59.970313061Z" level=info msg="Start event monitor" Jan 15 05:46:59.970529 containerd[1600]: time="2026-01-15T05:46:59.970504703Z" level=info msg="Start cni network conf syncer for default" Jan 15 05:46:59.970757 containerd[1600]: time="2026-01-15T05:46:59.970680221Z" level=info msg="Start streaming server" Jan 15 05:46:59.971539 containerd[1600]: time="2026-01-15T05:46:59.970915289Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 05:46:59.971539 containerd[1600]: time="2026-01-15T05:46:59.970938202Z" level=info msg="runtime interface starting up..." Jan 15 05:46:59.971539 containerd[1600]: time="2026-01-15T05:46:59.970949824Z" level=info msg="starting plugins..." Jan 15 05:46:59.971539 containerd[1600]: time="2026-01-15T05:46:59.970574200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 15 05:46:59.971539 containerd[1600]: time="2026-01-15T05:46:59.970972777Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 05:46:59.971696 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 05:46:59.977560 containerd[1600]: time="2026-01-15T05:46:59.977533700Z" level=info msg="containerd successfully booted in 0.158259s" Jan 15 05:46:59.977961 tar[1595]: linux-amd64/README.md Jan 15 05:47:00.002157 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 05:47:00.021569 systemd-networkd[1507]: eth0: Gained IPv6LL Jan 15 05:47:00.025756 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 05:47:00.029629 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 05:47:00.034459 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 15 05:47:00.038404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:47:00.042511 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 05:47:00.079973 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 15 05:47:00.098898 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 15 05:47:00.099506 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 15 05:47:00.103563 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 05:47:00.859592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:00.863872 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 05:47:00.866207 systemd[1]: Startup finished in 7.972s (kernel) + 12.705s (initrd) + 5.727s (userspace) = 26.405s. 
Jan 15 05:47:00.881775 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:47:01.369694 kubelet[1697]: E0115 05:47:01.369539 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:47:01.373151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:47:01.373505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 05:47:01.374033 systemd[1]: kubelet.service: Consumed 964ms CPU time, 267.4M memory peak. Jan 15 05:47:02.758215 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 05:47:02.760048 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). Jan 15 05:47:02.870027 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:02.872448 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:02.882066 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 05:47:02.883939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 05:47:02.891020 systemd-logind[1577]: New session 1 of user core. Jan 15 05:47:02.910322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 05:47:02.914173 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 05:47:02.932488 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:02.935877 systemd-logind[1577]: New session 2 of user core. Jan 15 05:47:03.074339 systemd[1717]: Queued start job for default target default.target. Jan 15 05:47:03.091851 systemd[1717]: Created slice app.slice - User Application Slice. Jan 15 05:47:03.091896 systemd[1717]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 15 05:47:03.091911 systemd[1717]: Reached target paths.target - Paths. Jan 15 05:47:03.091977 systemd[1717]: Reached target timers.target - Timers. Jan 15 05:47:03.093834 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 05:47:03.094953 systemd[1717]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 15 05:47:03.106920 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 05:47:03.107016 systemd[1717]: Reached target sockets.target - Sockets. Jan 15 05:47:03.109020 systemd[1717]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 15 05:47:03.109188 systemd[1717]: Reached target basic.target - Basic System. Jan 15 05:47:03.109300 systemd[1717]: Reached target default.target - Main User Target. Jan 15 05:47:03.109426 systemd[1717]: Startup finished in 167ms. Jan 15 05:47:03.109489 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 05:47:03.124581 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 05:47:03.150707 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:54386.service - OpenSSH per-connection server daemon (10.0.0.1:54386). 
Jan 15 05:47:03.219898 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 54386 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:03.221597 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:03.227420 systemd-logind[1577]: New session 3 of user core. Jan 15 05:47:03.240548 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 05:47:03.255068 sshd[1735]: Connection closed by 10.0.0.1 port 54386 Jan 15 05:47:03.255408 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:03.269061 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:54386.service: Deactivated successfully. Jan 15 05:47:03.271195 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 05:47:03.272259 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Jan 15 05:47:03.275494 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396). Jan 15 05:47:03.276463 systemd-logind[1577]: Removed session 3. Jan 15 05:47:03.336938 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:03.338580 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:03.344728 systemd-logind[1577]: New session 4 of user core. Jan 15 05:47:03.352590 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 05:47:03.362954 sshd[1745]: Connection closed by 10.0.0.1 port 54396 Jan 15 05:47:03.363286 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:03.375250 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:54396.service: Deactivated successfully. Jan 15 05:47:03.377554 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 05:47:03.378870 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Jan 15 05:47:03.382217 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:54404.service - OpenSSH per-connection server daemon (10.0.0.1:54404). Jan 15 05:47:03.383216 systemd-logind[1577]: Removed session 4. Jan 15 05:47:03.463257 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 54404 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:03.464975 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:03.470991 systemd-logind[1577]: New session 5 of user core. Jan 15 05:47:03.480539 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 15 05:47:03.495821 sshd[1757]: Connection closed by 10.0.0.1 port 54404 Jan 15 05:47:03.496150 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:03.509095 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:54404.service: Deactivated successfully. Jan 15 05:47:03.511077 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 05:47:03.512399 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Jan 15 05:47:03.515155 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:54408.service - OpenSSH per-connection server daemon (10.0.0.1:54408). Jan 15 05:47:03.516305 systemd-logind[1577]: Removed session 5. 
Jan 15 05:47:03.584467 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 54408 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:03.586008 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:03.591658 systemd-logind[1577]: New session 6 of user core. Jan 15 05:47:03.610620 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 15 05:47:03.635564 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 05:47:03.636015 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:47:03.653483 sudo[1769]: pam_unix(sudo:session): session closed for user root Jan 15 05:47:03.655040 sshd[1768]: Connection closed by 10.0.0.1 port 54408 Jan 15 05:47:03.655548 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:03.665082 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:54408.service: Deactivated successfully. Jan 15 05:47:03.666957 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 05:47:03.668158 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Jan 15 05:47:03.670832 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:54420.service - OpenSSH per-connection server daemon (10.0.0.1:54420). Jan 15 05:47:03.671902 systemd-logind[1577]: Removed session 6. Jan 15 05:47:03.731607 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 54420 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:03.733227 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:03.739142 systemd-logind[1577]: New session 7 of user core. Jan 15 05:47:03.749590 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 05:47:03.767170 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 05:47:03.767658 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:47:03.772511 sudo[1782]: pam_unix(sudo:session): session closed for user root Jan 15 05:47:03.782486 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 05:47:03.782973 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:47:03.794909 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 05:47:03.856000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:47:03.858286 augenrules[1806]: No rules Jan 15 05:47:03.860133 kernel: kauditd_printk_skb: 177 callbacks suppressed Jan 15 05:47:03.860171 kernel: audit: type=1305 audit(1768456023.856:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 15 05:47:03.860187 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 05:47:03.860594 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 15 05:47:03.861854 sudo[1781]: pam_unix(sudo:session): session closed for user root Jan 15 05:47:03.863533 sshd[1780]: Connection closed by 10.0.0.1 port 54420 Jan 15 05:47:03.856000 audit[1806]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6453990 a2=420 a3=0 items=0 ppid=1787 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:03.865978 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:03.873019 kernel: audit: type=1300 audit(1768456023.856:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd6453990 a2=420 a3=0 items=0 ppid=1787 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:03.873059 kernel: audit: type=1327 audit(1768456023.856:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:47:03.856000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 15 05:47:03.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.882791 kernel: audit: type=1130 audit(1768456023.859:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.882833 kernel: audit: type=1131 audit(1768456023.859:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.859000 audit[1781]: USER_END pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.895404 kernel: audit: type=1106 audit(1768456023.859:227): pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.895443 kernel: audit: type=1104 audit(1768456023.859:228): pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.859000 audit[1781]: CRED_DISP pid=1781 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 15 05:47:03.864000 audit[1776]: USER_END pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:03.910727 kernel: audit: type=1106 audit(1768456023.864:229): pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:03.910760 kernel: audit: type=1104 audit(1768456023.864:230): pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:03.864000 audit[1776]: CRED_DISP pid=1776 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:03.932217 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:54420.service: Deactivated successfully. Jan 15 05:47:03.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.92:22-10.0.0.1:54420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.934231 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 05:47:03.935391 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Jan 15 05:47:03.938238 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:54426.service - OpenSSH per-connection server daemon (10.0.0.1:54426). Jan 15 05:47:03.939042 systemd-logind[1577]: Removed session 7. Jan 15 05:47:03.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:54426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:03.940405 kernel: audit: type=1131 audit(1768456023.931:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.92:22-10.0.0.1:54420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:47:04.004000 audit[1815]: USER_ACCT pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:04.006422 sshd[1815]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:47:04.006000 audit[1815]: CRED_ACQ pid=1815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:04.006000 audit[1815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6f79ac80 a2=3 a3=0 items=0 ppid=1 pid=1815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:04.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:47:04.008200 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:47:04.013606 systemd-logind[1577]: New session 8 of user core. Jan 15 05:47:04.031510 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 15 05:47:04.032000 audit[1815]: USER_START pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:04.034000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:04.046000 audit[1820]: USER_ACCT pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:04.047973 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 05:47:04.046000 audit[1820]: CRED_REFR pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:04.048475 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 05:47:04.047000 audit[1820]: USER_START pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:04.374307 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 15 05:47:04.401752 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 05:47:04.656253 dockerd[1841]: time="2026-01-15T05:47:04.656054559Z" level=info msg="Starting up" Jan 15 05:47:04.657006 dockerd[1841]: time="2026-01-15T05:47:04.656960501Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 05:47:04.673161 dockerd[1841]: time="2026-01-15T05:47:04.673106848Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 05:47:04.892112 dockerd[1841]: time="2026-01-15T05:47:04.892005847Z" level=info msg="Loading containers: start." Jan 15 05:47:04.904403 kernel: Initializing XFRM netlink socket Jan 15 05:47:04.984000 audit[1895]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:04.984000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcaf68d7b0 a2=0 a3=0 items=0 ppid=1841 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:04.984000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:47:04.988000 audit[1897]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:04.988000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffde22a7710 a2=0 a3=0 items=0 ppid=1841 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:04.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:47:04.992000 audit[1899]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:04.992000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff75817100 a2=0 a3=0 items=0 ppid=1841 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:04.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:47:04.996000 audit[1901]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:04.996000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff68625eb0 a2=0 a3=0 items=0 ppid=1841 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:04.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:47:05.000000 audit[1903]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.000000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc148c4cc0 a2=0 a3=0 items=0 ppid=1841 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:47:05.004000 audit[1905]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.004000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0dedbd60 a2=0 a3=0 items=0 ppid=1841 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:47:05.008000 audit[1907]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.008000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd2a4cb90 a2=0 a3=0 items=0 ppid=1841 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.008000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:47:05.013000 audit[1909]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.013000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff79269bf0 a2=0 a3=0 items=0 ppid=1841 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:47:05.049000 audit[1912]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.049000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd0332e070 a2=0 a3=0 items=0 ppid=1841 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 15 05:47:05.053000 audit[1914]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1914 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.053000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff0fb4b160 a2=0 a3=0 items=0 ppid=1841 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.053000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:47:05.058000 audit[1916]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.058000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff87d6bfc0 a2=0 a3=0 items=0 ppid=1841 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:47:05.061000 audit[1918]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.061000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe5f373580 a2=0 a3=0 items=0 ppid=1841 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:47:05.065000 audit[1920]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.065000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffe6abe0b0 a2=0 a3=0 items=0 ppid=1841 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:47:05.133000 audit[1950]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.133000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe56bff5b0 a2=0 a3=0 items=0 ppid=1841 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 15 05:47:05.136000 audit[1952]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.136000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee96a0b20 a2=0 a3=0 items=0 ppid=1841 pid=1952 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.136000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 15 05:47:05.140000 audit[1954]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.140000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa36c68c0 a2=0 a3=0 items=0 ppid=1841 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 15 05:47:05.144000 audit[1956]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.144000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3eaa8fa0 a2=0 a3=0 items=0 ppid=1841 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 15 05:47:05.147000 audit[1958]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.147000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed0775610 a2=0 a3=0 items=0 ppid=1841 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 15 05:47:05.151000 audit[1960]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.151000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc394679d0 a2=0 a3=0 items=0 ppid=1841 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:47:05.154000 audit[1962]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.154000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc76e96cc0 a2=0 a3=0 items=0 ppid=1841 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.154000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:47:05.159000 audit[1964]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.159000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff3464a440 a2=0 a3=0 items=0 ppid=1841 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 15 05:47:05.163000 audit[1966]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.163000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe955926b0 a2=0 a3=0 items=0 ppid=1841 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 15 05:47:05.167000 audit[1968]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.167000 audit[1968]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe9fc6a950 a2=0 a3=0 items=0 ppid=1841 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 15 05:47:05.171000 audit[1970]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.171000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc06fd41a0 a2=0 a3=0 items=0 ppid=1841 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 15 05:47:05.175000 audit[1972]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.175000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffb55d3ed0 a2=0 a3=0 items=0 ppid=1841 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.175000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 15 05:47:05.178000 audit[1974]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.178000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe5ce63ed0 a2=0 a3=0 items=0 ppid=1841 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 15 05:47:05.189000 audit[1979]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.189000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffead9e73d0 a2=0 a3=0 items=0 ppid=1841 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:47:05.194000 audit[1981]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.194000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe70e0d710 a2=0 a3=0 items=0 ppid=1841 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 05:47:05.198000 audit[1983]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.198000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff9d5dbe90 a2=0 a3=0 items=0 ppid=1841 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:47:05.201000 audit[1985]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.201000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff1e421e40 a2=0 a3=0 items=0 ppid=1841 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 15 05:47:05.205000 audit[1987]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.205000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcefe81af0 a2=0 a3=0 items=0 ppid=1841 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 15 05:47:05.209000 audit[1989]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:05.209000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffde30427f0 a2=0 a3=0 items=0 ppid=1841 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.209000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 15 05:47:05.228000 audit[1993]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.228000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc3227a140 a2=0 a3=0 items=0 ppid=1841 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 15 05:47:05.232000 audit[1995]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.232000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff2a9db830 a2=0 a3=0 items=0 ppid=1841 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 15 05:47:05.248000 audit[2003]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.248000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc7a6cfcd0 a2=0 a3=0 items=0 ppid=1841 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 15 05:47:05.263000 audit[2009]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.263000 
audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffffa70b070 a2=0 a3=0 items=0 ppid=1841 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.263000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 15 05:47:05.267000 audit[2011]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.267000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc93366620 a2=0 a3=0 items=0 ppid=1841 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.267000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 15 05:47:05.271000 audit[2013]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.271000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffca1fc7540 a2=0 a3=0 items=0 ppid=1841 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 15 05:47:05.275000 audit[2015]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.275000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd4fae11f0 a2=0 a3=0 items=0 ppid=1841 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 15 05:47:05.278000 audit[2017]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:05.278000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe1923e120 a2=0 a3=0 items=0 ppid=1841 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:05.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 15 05:47:05.280735 
systemd-networkd[1507]: docker0: Link UP Jan 15 05:47:05.285960 dockerd[1841]: time="2026-01-15T05:47:05.285885218Z" level=info msg="Loading containers: done." Jan 15 05:47:05.303080 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck403535858-merged.mount: Deactivated successfully. Jan 15 05:47:05.308582 dockerd[1841]: time="2026-01-15T05:47:05.308508620Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 05:47:05.308665 dockerd[1841]: time="2026-01-15T05:47:05.308615880Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 05:47:05.308790 dockerd[1841]: time="2026-01-15T05:47:05.308745712Z" level=info msg="Initializing buildkit" Jan 15 05:47:05.343862 dockerd[1841]: time="2026-01-15T05:47:05.343810008Z" level=info msg="Completed buildkit initialization" Jan 15 05:47:05.349845 dockerd[1841]: time="2026-01-15T05:47:05.349762251Z" level=info msg="Daemon has completed initialization" Jan 15 05:47:05.349975 dockerd[1841]: time="2026-01-15T05:47:05.349921138Z" level=info msg="API listen on /run/docker.sock" Jan 15 05:47:05.350107 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 05:47:05.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:06.042570 containerd[1600]: time="2026-01-15T05:47:06.042466467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 15 05:47:06.564085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463014609.mount: Deactivated successfully. 
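
The PROCTITLE records above carry the iptables/ip6tables command lines that dockerd ran while creating its DOCKER-USER, DOCKER-FORWARD and DOCKER-ISOLATION chains; audit encodes them as hex with NUL bytes between arguments. A minimal Python sketch (not part of the log) that decodes one of the strings logged above:

    # Audit PROCTITLE values are the process command line, hex-encoded with
    # NUL separators between arguments. This string is copied verbatim from
    # one of the NETFILTER_CFG records above.
    proctitle = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
                 "002D740066696C746572002D4E00444F434B45522D55534552")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
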
Jan 15 05:47:07.504595 containerd[1600]: time="2026-01-15T05:47:07.504509317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:07.505619 containerd[1600]: time="2026-01-15T05:47:07.505585096Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 15 05:47:07.507230 containerd[1600]: time="2026-01-15T05:47:07.507166688Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:07.510801 containerd[1600]: time="2026-01-15T05:47:07.510690470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:07.511679 containerd[1600]: time="2026-01-15T05:47:07.511624849Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.469100514s" Jan 15 05:47:07.511679 containerd[1600]: time="2026-01-15T05:47:07.511674281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 15 05:47:07.512441 containerd[1600]: time="2026-01-15T05:47:07.512314195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 15 05:47:08.882013 containerd[1600]: time="2026-01-15T05:47:08.881919452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:08.883106 containerd[1600]: time="2026-01-15T05:47:08.883057261Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 15 05:47:08.884338 containerd[1600]: time="2026-01-15T05:47:08.884303206Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:08.886967 containerd[1600]: time="2026-01-15T05:47:08.886891681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:08.887925 containerd[1600]: time="2026-01-15T05:47:08.887837326Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.375497503s" Jan 15 05:47:08.887925 containerd[1600]: time="2026-01-15T05:47:08.887880947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 15 05:47:08.888556 
containerd[1600]: time="2026-01-15T05:47:08.888517155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 15 05:47:10.030463 containerd[1600]: time="2026-01-15T05:47:10.030404409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:10.031314 containerd[1600]: time="2026-01-15T05:47:10.031284099Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 15 05:47:10.032764 containerd[1600]: time="2026-01-15T05:47:10.032705146Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:10.035441 containerd[1600]: time="2026-01-15T05:47:10.035408583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:10.036512 containerd[1600]: time="2026-01-15T05:47:10.036449711Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.147893484s" Jan 15 05:47:10.036512 containerd[1600]: time="2026-01-15T05:47:10.036490056Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 15 05:47:10.037102 containerd[1600]: time="2026-01-15T05:47:10.037072856Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 15 05:47:10.943789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391506452.mount: Deactivated successfully. 
Jan 15 05:47:11.256785 containerd[1600]: time="2026-01-15T05:47:11.256590611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:11.257929 containerd[1600]: time="2026-01-15T05:47:11.257878325Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 15 05:47:11.258933 containerd[1600]: time="2026-01-15T05:47:11.258883431Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:11.261005 containerd[1600]: time="2026-01-15T05:47:11.260953414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:11.261514 containerd[1600]: time="2026-01-15T05:47:11.261466287Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.224366691s" Jan 15 05:47:11.261514 containerd[1600]: time="2026-01-15T05:47:11.261506952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 15 05:47:11.262156 containerd[1600]: time="2026-01-15T05:47:11.262024734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 15 05:47:11.544817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 05:47:11.547105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:47:11.761581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:11.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:11.763560 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 15 05:47:11.763629 kernel: audit: type=1130 audit(1768456031.760:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:11.770152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359120034.mount: Deactivated successfully. Jan 15 05:47:11.785890 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 05:47:11.928577 kubelet[2145]: E0115 05:47:11.928330 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 05:47:11.934261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 05:47:11.934534 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
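
The kubelet exit above is the expected state on a node that has not been joined yet: /var/lib/kubelet/config.yaml is typically written by kubeadm, so the unit fails and systemd keeps scheduling restarts until the file exists. The kubelet's own lines use the klog header format ("E0115 05:47:11.928330 2145 run.go:72] ..." is severity, month/day, wall time, PID, source location). A small Python sketch (illustrative, with a truncated copy of the line above) that splits that header off:

    import re

    # klog header: <severity><MMDD> <HH:MM:SS.micros> <pid> <file:line>] <message>
    KLOG = re.compile(r"^([IWEF])(\d{4}) (\d\d:\d\d:\d\d\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$")

    line = ('E0115 05:47:11.928330    2145 run.go:72] "command failed" '
            'err="failed to load kubelet config file, ..."')
    sev, _, _, pid, loc, msg = KLOG.match(line).groups()
    print(sev, pid, loc, msg)   # severity, PID, source location, message
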
Jan 15 05:47:11.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:47:11.935022 systemd[1]: kubelet.service: Consumed 260ms CPU time, 111.9M memory peak. Jan 15 05:47:11.942421 kernel: audit: type=1131 audit(1768456031.933:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 15 05:47:12.531498 containerd[1600]: time="2026-01-15T05:47:12.531414714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:12.532216 containerd[1600]: time="2026-01-15T05:47:12.532180209Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 15 05:47:12.533453 containerd[1600]: time="2026-01-15T05:47:12.533403175Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:12.536094 containerd[1600]: time="2026-01-15T05:47:12.536032943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:12.536932 containerd[1600]: time="2026-01-15T05:47:12.536891331Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.274829648s" Jan 15 05:47:12.536932 containerd[1600]: time="2026-01-15T05:47:12.536929051Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 15 05:47:12.537464 containerd[1600]: time="2026-01-15T05:47:12.537432543Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 05:47:12.922511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098905052.mount: Deactivated successfully. 
Jan 15 05:47:12.928742 containerd[1600]: time="2026-01-15T05:47:12.928677169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:47:12.929963 containerd[1600]: time="2026-01-15T05:47:12.929841712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=316581" Jan 15 05:47:12.931029 containerd[1600]: time="2026-01-15T05:47:12.930939820Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:47:12.933215 containerd[1600]: time="2026-01-15T05:47:12.933181275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 05:47:12.934001 containerd[1600]: time="2026-01-15T05:47:12.933954449Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 396.487412ms" Jan 15 05:47:12.934001 containerd[1600]: time="2026-01-15T05:47:12.933991067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 15 05:47:12.934638 containerd[1600]: time="2026-01-15T05:47:12.934541372Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 15 05:47:13.339512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356653394.mount: Deactivated successfully. 
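
The transient mount units being cleaned up here ("var-lib-containerd-tmpmounts-containerd\x2dmount....mount") use systemd's unit-name escaping: "/" in the mount path becomes "-" and a literal "-" becomes "\x2d". A tiny Python sketch that recovers the path from the unit name logged above (illustrative only; systemd-escape --unescape --path is the real tool):

    # Undo systemd unit-name escaping for the mount unit logged above.
    unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount3356653394.mount"
    stem = unit.removesuffix(".mount")
    # Protect literal "-" (escaped as \x2d), turn "-" back into "/", then restore.
    path = "/" + stem.replace(r"\x2d", "\0").replace("-", "/").replace("\0", "-")
    print(path)   # /var/lib/containerd/tmpmounts/containerd-mount3356653394
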
Jan 15 05:47:15.177045 containerd[1600]: time="2026-01-15T05:47:15.176947558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:15.177684 containerd[1600]: time="2026-01-15T05:47:15.177653098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 15 05:47:15.179176 containerd[1600]: time="2026-01-15T05:47:15.179086443Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:15.181590 containerd[1600]: time="2026-01-15T05:47:15.181539372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:15.182769 containerd[1600]: time="2026-01-15T05:47:15.182716566Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.248129909s" Jan 15 05:47:15.182769 containerd[1600]: time="2026-01-15T05:47:15.182759316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 15 05:47:19.087579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:19.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:19.087812 systemd[1]: kubelet.service: Consumed 260ms CPU time, 111.9M memory peak. Jan 15 05:47:19.090482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:47:19.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:19.101171 kernel: audit: type=1130 audit(1768456039.086:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:19.101278 kernel: audit: type=1131 audit(1768456039.086:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:19.129964 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-8.scope)... Jan 15 05:47:19.130001 systemd[1]: Reloading... Jan 15 05:47:19.211419 zram_generator::config[2340]: No configuration found. Jan 15 05:47:19.494399 systemd[1]: Reloading finished in 363 ms. 
Jan 15 05:47:19.527640 kernel: audit: type=1334 audit(1768456039.522:286): prog-id=63 op=LOAD Jan 15 05:47:19.527751 kernel: audit: type=1334 audit(1768456039.522:287): prog-id=59 op=UNLOAD Jan 15 05:47:19.522000 audit: BPF prog-id=63 op=LOAD Jan 15 05:47:19.522000 audit: BPF prog-id=59 op=UNLOAD Jan 15 05:47:19.522000 audit: BPF prog-id=64 op=LOAD Jan 15 05:47:19.529990 kernel: audit: type=1334 audit(1768456039.522:288): prog-id=64 op=LOAD Jan 15 05:47:19.530087 kernel: audit: type=1334 audit(1768456039.522:289): prog-id=65 op=LOAD Jan 15 05:47:19.522000 audit: BPF prog-id=65 op=LOAD Jan 15 05:47:19.531967 kernel: audit: type=1334 audit(1768456039.522:290): prog-id=44 op=UNLOAD Jan 15 05:47:19.522000 audit: BPF prog-id=44 op=UNLOAD Jan 15 05:47:19.534008 kernel: audit: type=1334 audit(1768456039.522:291): prog-id=45 op=UNLOAD Jan 15 05:47:19.522000 audit: BPF prog-id=45 op=UNLOAD Jan 15 05:47:19.536180 kernel: audit: type=1334 audit(1768456039.525:292): prog-id=66 op=LOAD Jan 15 05:47:19.525000 audit: BPF prog-id=66 op=LOAD Jan 15 05:47:19.539202 kernel: audit: type=1334 audit(1768456039.525:293): prog-id=55 op=UNLOAD Jan 15 05:47:19.525000 audit: BPF prog-id=55 op=UNLOAD Jan 15 05:47:19.525000 audit: BPF prog-id=67 op=LOAD Jan 15 05:47:19.525000 audit: BPF prog-id=68 op=LOAD Jan 15 05:47:19.525000 audit: BPF prog-id=56 op=UNLOAD Jan 15 05:47:19.525000 audit: BPF prog-id=57 op=UNLOAD Jan 15 05:47:19.527000 audit: BPF prog-id=69 op=LOAD Jan 15 05:47:19.550000 audit: BPF prog-id=43 op=UNLOAD Jan 15 05:47:19.551000 audit: BPF prog-id=70 op=LOAD Jan 15 05:47:19.551000 audit: BPF prog-id=49 op=UNLOAD Jan 15 05:47:19.551000 audit: BPF prog-id=71 op=LOAD Jan 15 05:47:19.551000 audit: BPF prog-id=72 op=LOAD Jan 15 05:47:19.551000 audit: BPF prog-id=50 op=UNLOAD Jan 15 05:47:19.551000 audit: BPF prog-id=51 op=UNLOAD Jan 15 05:47:19.552000 audit: BPF prog-id=73 op=LOAD Jan 15 05:47:19.552000 audit: BPF prog-id=46 op=UNLOAD Jan 15 05:47:19.552000 audit: BPF prog-id=74 op=LOAD Jan 15 05:47:19.552000 audit: BPF prog-id=75 op=LOAD Jan 15 05:47:19.553000 audit: BPF prog-id=47 op=UNLOAD Jan 15 05:47:19.553000 audit: BPF prog-id=48 op=UNLOAD Jan 15 05:47:19.554000 audit: BPF prog-id=76 op=LOAD Jan 15 05:47:19.554000 audit: BPF prog-id=58 op=UNLOAD Jan 15 05:47:19.557000 audit: BPF prog-id=77 op=LOAD Jan 15 05:47:19.557000 audit: BPF prog-id=60 op=UNLOAD Jan 15 05:47:19.557000 audit: BPF prog-id=78 op=LOAD Jan 15 05:47:19.557000 audit: BPF prog-id=79 op=LOAD Jan 15 05:47:19.557000 audit: BPF prog-id=61 op=UNLOAD Jan 15 05:47:19.557000 audit: BPF prog-id=62 op=UNLOAD Jan 15 05:47:19.558000 audit: BPF prog-id=80 op=LOAD Jan 15 05:47:19.558000 audit: BPF prog-id=52 op=UNLOAD Jan 15 05:47:19.558000 audit: BPF prog-id=81 op=LOAD Jan 15 05:47:19.558000 audit: BPF prog-id=82 op=LOAD Jan 15 05:47:19.558000 audit: BPF prog-id=53 op=UNLOAD Jan 15 05:47:19.558000 audit: BPF prog-id=54 op=UNLOAD Jan 15 05:47:19.587311 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 05:47:19.587509 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 05:47:19.587953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:19.588040 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.7M memory peak. Jan 15 05:47:19.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 15 05:47:19.590220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:47:19.789737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:19.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:19.809018 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:47:19.876423 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:47:19.876423 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:47:19.876423 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:47:19.876862 kubelet[2392]: I0115 05:47:19.876529 2392 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:47:20.381617 kubelet[2392]: I0115 05:47:20.381540 2392 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 15 05:47:20.381617 kubelet[2392]: I0115 05:47:20.381602 2392 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:47:20.381980 kubelet[2392]: I0115 05:47:20.381922 2392 server.go:956] "Client rotation is on, will bootstrap in background" Jan 15 05:47:20.401641 kubelet[2392]: E0115 05:47:20.401565 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 15 05:47:20.405130 kubelet[2392]: I0115 05:47:20.404977 2392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:47:20.412496 kubelet[2392]: I0115 05:47:20.412432 2392 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:47:20.428099 kubelet[2392]: I0115 05:47:20.428018 2392 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 05:47:20.428803 kubelet[2392]: I0115 05:47:20.428752 2392 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:47:20.429062 kubelet[2392]: I0115 05:47:20.428788 2392 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:47:20.429062 kubelet[2392]: I0115 05:47:20.429040 2392 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:47:20.429062 kubelet[2392]: I0115 05:47:20.429052 2392 container_manager_linux.go:303] "Creating device plugin manager" Jan 15 05:47:20.429237 kubelet[2392]: I0115 05:47:20.429222 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:47:20.431569 kubelet[2392]: I0115 05:47:20.431510 2392 kubelet.go:480] "Attempting to sync node with API server" Jan 15 05:47:20.431569 kubelet[2392]: I0115 05:47:20.431550 2392 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:47:20.431569 kubelet[2392]: I0115 05:47:20.431579 2392 kubelet.go:386] "Adding apiserver pod source" Jan 15 05:47:20.433173 kubelet[2392]: I0115 05:47:20.433131 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:47:20.436641 kubelet[2392]: E0115 05:47:20.436332 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 15 05:47:20.436641 kubelet[2392]: E0115 05:47:20.436329 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 15 05:47:20.437743 
kubelet[2392]: I0115 05:47:20.437649 2392 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:47:20.438539 kubelet[2392]: I0115 05:47:20.438457 2392 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 15 05:47:20.439589 kubelet[2392]: W0115 05:47:20.439523 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 05:47:20.444266 kubelet[2392]: I0115 05:47:20.444201 2392 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 05:47:20.444433 kubelet[2392]: I0115 05:47:20.444339 2392 server.go:1289] "Started kubelet" Jan 15 05:47:20.449561 kubelet[2392]: I0115 05:47:20.449512 2392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:47:20.451132 kubelet[2392]: I0115 05:47:20.451104 2392 server.go:317] "Adding debug handlers to kubelet server" Jan 15 05:47:20.452579 kubelet[2392]: I0115 05:47:20.452522 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:47:20.452663 kubelet[2392]: E0115 05:47:20.451290 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ad1662b92f968 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 05:47:20.44426276 +0000 UTC m=+0.627165354,LastTimestamp:2026-01-15 05:47:20.44426276 +0000 UTC m=+0.627165354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 05:47:20.453024 kubelet[2392]: I0115 05:47:20.453004 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:47:20.453721 kubelet[2392]: I0115 05:47:20.451193 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:47:20.454093 kubelet[2392]: I0115 05:47:20.454026 2392 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:47:20.456032 kubelet[2392]: E0115 05:47:20.455969 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 15 05:47:20.456032 kubelet[2392]: I0115 05:47:20.456025 2392 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 05:47:20.456457 kubelet[2392]: I0115 05:47:20.456424 2392 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 05:47:20.456570 kubelet[2392]: I0115 05:47:20.456546 2392 reconciler.go:26] "Reconciler: start to sync state" Jan 15 05:47:20.457305 kubelet[2392]: E0115 05:47:20.457261 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jan 15 05:47:20.457523 kubelet[2392]: E0115 05:47:20.457481 2392 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 05:47:20.460035 kubelet[2392]: I0115 05:47:20.459946 2392 factory.go:223] Registration of the containerd container factory successfully Jan 15 05:47:20.460035 kubelet[2392]: I0115 05:47:20.459973 2392 factory.go:223] Registration of the systemd container factory successfully Jan 15 05:47:20.460280 kubelet[2392]: I0115 05:47:20.460247 2392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:47:20.462392 kubelet[2392]: E0115 05:47:20.461040 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Jan 15 05:47:20.464000 audit[2411]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.464000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd5d0ed1e0 a2=0 a3=0 items=0 ppid=2392 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.464000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:47:20.466000 audit[2413]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.466000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff48826b0 a2=0 a3=0 items=0 ppid=2392 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:47:20.470000 audit[2415]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.470000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff64f0c160 a2=0 a3=0 items=0 ppid=2392 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:47:20.478000 audit[2417]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.478000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd45724660 a2=0 a3=0 items=0 ppid=2392 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:47:20.487951 kubelet[2392]: I0115 05:47:20.487877 2392 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:47:20.487951 kubelet[2392]: I0115 05:47:20.487907 2392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:47:20.487951 kubelet[2392]: I0115 05:47:20.487928 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:47:20.491458 kubelet[2392]: I0115 05:47:20.491402 2392 policy_none.go:49] "None policy: Start" Jan 15 05:47:20.491458 kubelet[2392]: I0115 05:47:20.491445 2392 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 05:47:20.491458 kubelet[2392]: I0115 05:47:20.491460 2392 state_mem.go:35] "Initializing new in-memory state store" Jan 15 05:47:20.491000 audit[2422]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.491000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffa53177b0 a2=0 a3=0 items=0 ppid=2392 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 15 05:47:20.493667 kubelet[2392]: I0115 05:47:20.493622 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 15 05:47:20.493000 audit[2424]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:20.493000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffffa4b6640 a2=0 a3=0 items=0 ppid=2392 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 15 05:47:20.495826 kubelet[2392]: I0115 05:47:20.495800 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 15 05:47:20.495981 kubelet[2392]: I0115 05:47:20.495832 2392 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 15 05:47:20.495981 kubelet[2392]: I0115 05:47:20.495887 2392 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 05:47:20.495981 kubelet[2392]: I0115 05:47:20.495894 2392 kubelet.go:2436] "Starting kubelet main sync loop" Jan 15 05:47:20.495981 kubelet[2392]: E0115 05:47:20.495942 2392 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:47:20.494000 audit[2425]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.494000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc231c1ae0 a2=0 a3=0 items=0 ppid=2392 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:47:20.496000 audit[2426]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:20.496000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb6daff80 a2=0 a3=0 items=0 ppid=2392 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 15 05:47:20.498499 kubelet[2392]: E0115 05:47:20.498473 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 05:47:20.498000 audit[2427]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.498000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc50a36e40 a2=0 a3=0 items=0 ppid=2392 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:47:20.500535 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 15 05:47:20.500000 audit[2428]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:20.500000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce3771020 a2=0 a3=0 items=0 ppid=2392 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 15 05:47:20.501000 audit[2429]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:20.501000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc91fef930 a2=0 a3=0 items=0 ppid=2392 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:47:20.502000 audit[2430]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:20.502000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd2d85760 a2=0 a3=0 items=0 ppid=2392 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:20.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 15 05:47:20.513781 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 05:47:20.517732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 05:47:20.530388 kubelet[2392]: E0115 05:47:20.530213 2392 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 05:47:20.530685 kubelet[2392]: I0115 05:47:20.530648 2392 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:47:20.530756 kubelet[2392]: I0115 05:47:20.530691 2392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:47:20.531239 kubelet[2392]: I0115 05:47:20.531152 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:47:20.533125 kubelet[2392]: E0115 05:47:20.533035 2392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 05:47:20.533183 kubelet[2392]: E0115 05:47:20.533169 2392 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 15 05:47:20.613328 systemd[1]: Created slice kubepods-burstable-pode8c048af14328a28b4eeac7d5b7fbee1.slice - libcontainer container kubepods-burstable-pode8c048af14328a28b4eeac7d5b7fbee1.slice. 
Jan 15 05:47:20.632801 kubelet[2392]: I0115 05:47:20.632631 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:47:20.634504 kubelet[2392]: E0115 05:47:20.634439 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 15 05:47:20.642452 kubelet[2392]: E0115 05:47:20.642337 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:20.647677 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 15 05:47:20.650380 kubelet[2392]: E0115 05:47:20.650314 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:20.653089 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 15 05:47:20.655739 kubelet[2392]: E0115 05:47:20.655658 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:20.657895 kubelet[2392]: I0115 05:47:20.657819 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:20.657986 kubelet[2392]: I0115 05:47:20.657902 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:20.657986 kubelet[2392]: I0115 05:47:20.657931 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:20.657986 kubelet[2392]: I0115 05:47:20.657945 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:20.657986 kubelet[2392]: I0115 05:47:20.657961 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:20.657986 kubelet[2392]: I0115 05:47:20.657983 2392 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:20.658157 kubelet[2392]: I0115 05:47:20.657998 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:20.658157 kubelet[2392]: I0115 05:47:20.658011 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:20.658157 kubelet[2392]: I0115 05:47:20.658024 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:20.661873 kubelet[2392]: E0115 05:47:20.661769 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Jan 15 05:47:20.661873 kubelet[2392]: E0115 05:47:20.661754 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ad1662b92f968 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-15 05:47:20.44426276 +0000 UTC m=+0.627165354,LastTimestamp:2026-01-15 05:47:20.44426276 +0000 UTC m=+0.627165354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 15 05:47:20.836526 kubelet[2392]: I0115 05:47:20.836438 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:47:20.836936 kubelet[2392]: E0115 05:47:20.836872 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 15 05:47:20.943962 kubelet[2392]: E0115 05:47:20.943776 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:20.944836 containerd[1600]: time="2026-01-15T05:47:20.944704740Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e8c048af14328a28b4eeac7d5b7fbee1,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:20.951139 kubelet[2392]: E0115 05:47:20.951085 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:20.951884 containerd[1600]: time="2026-01-15T05:47:20.951678434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:20.956307 kubelet[2392]: E0115 05:47:20.956227 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:20.956923 containerd[1600]: time="2026-01-15T05:47:20.956890616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:21.007250 containerd[1600]: time="2026-01-15T05:47:21.001963466Z" level=info msg="connecting to shim f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523" address="unix:///run/containerd/s/f5363703a8fefed015eb51d9550df9ab9c0564e2eb652e70efd12affaa9653fc" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:21.030132 containerd[1600]: time="2026-01-15T05:47:21.020060675Z" level=info msg="connecting to shim be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5" address="unix:///run/containerd/s/b52836ede4cf5c9a72df913266881cf46f6fcd81902c32468bd318a9b925e1eb" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:21.073827 containerd[1600]: time="2026-01-15T05:47:21.066311704Z" level=info msg="connecting to shim 15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a" address="unix:///run/containerd/s/8520fcac4b453a78b76b5d81849c4f6751300e54796d2d18e0ab8893f6c31209" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:21.074031 kubelet[2392]: E0115 05:47:21.073762 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Jan 15 05:47:21.089242 systemd[1]: Started cri-containerd-f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523.scope - libcontainer container f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523. Jan 15 05:47:21.099536 systemd[1]: Started cri-containerd-be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5.scope - libcontainer container be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5. Jan 15 05:47:21.120603 systemd[1]: Started cri-containerd-15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a.scope - libcontainer container 15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a. 
Jan 15 05:47:21.123000 audit: BPF prog-id=83 op=LOAD Jan 15 05:47:21.124000 audit: BPF prog-id=84 op=LOAD Jan 15 05:47:21.124000 audit[2488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.125000 audit: BPF prog-id=84 op=UNLOAD Jan 15 05:47:21.125000 audit[2488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.125000 audit: BPF prog-id=85 op=LOAD Jan 15 05:47:21.126000 audit: BPF prog-id=86 op=LOAD Jan 15 05:47:21.126000 audit[2488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.126000 audit: BPF prog-id=87 op=LOAD Jan 15 05:47:21.126000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.126000 audit: BPF prog-id=87 op=UNLOAD Jan 15 05:47:21.126000 audit[2476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.126000 audit: BPF prog-id=88 op=LOAD 
Jan 15 05:47:21.126000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.127000 audit: BPF prog-id=89 op=LOAD Jan 15 05:47:21.127000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.127000 audit: BPF prog-id=89 op=UNLOAD Jan 15 05:47:21.127000 audit[2476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.127000 audit: BPF prog-id=88 op=UNLOAD Jan 15 05:47:21.127000 audit[2476]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.127000 audit: BPF prog-id=90 op=LOAD Jan 15 05:47:21.127000 audit[2476]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2451 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637326463653937653865373466363131343165643966393961326461 Jan 15 05:47:21.128000 audit: BPF prog-id=91 op=LOAD Jan 15 05:47:21.128000 audit[2488]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2455 pid=2488 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.128000 audit: BPF prog-id=91 op=UNLOAD Jan 15 05:47:21.128000 audit[2488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.128000 audit: BPF prog-id=86 op=UNLOAD Jan 15 05:47:21.128000 audit[2488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.129000 audit: BPF prog-id=92 op=LOAD Jan 15 05:47:21.129000 audit[2488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2455 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265336665313561616439343030626461373231373034646334396364 Jan 15 05:47:21.136000 audit: BPF prog-id=93 op=LOAD Jan 15 05:47:21.137000 audit: BPF prog-id=94 op=LOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=94 op=UNLOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=95 op=LOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=96 op=LOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=96 op=UNLOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=95 op=UNLOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.137000 audit: BPF prog-id=97 op=LOAD Jan 15 05:47:21.137000 audit[2513]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2490 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.137000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135323539643864636264646366653366313161353165313633313563 Jan 15 05:47:21.199046 containerd[1600]: time="2026-01-15T05:47:21.196949443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e8c048af14328a28b4eeac7d5b7fbee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523\"" Jan 15 05:47:21.200146 kubelet[2392]: E0115 05:47:21.200042 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:21.200622 containerd[1600]: time="2026-01-15T05:47:21.200518289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5\"" Jan 15 05:47:21.201918 kubelet[2392]: E0115 05:47:21.201829 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:21.202816 containerd[1600]: time="2026-01-15T05:47:21.202752135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a\"" Jan 15 05:47:21.203659 kubelet[2392]: E0115 05:47:21.203589 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:21.205181 containerd[1600]: time="2026-01-15T05:47:21.205131552Z" level=info msg="CreateContainer within sandbox \"f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 15 05:47:21.207657 containerd[1600]: time="2026-01-15T05:47:21.207542365Z" level=info msg="CreateContainer within sandbox \"be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 15 05:47:21.213482 containerd[1600]: time="2026-01-15T05:47:21.213398102Z" level=info msg="Container b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:21.222602 containerd[1600]: time="2026-01-15T05:47:21.222524646Z" level=info msg="Container 7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:21.226682 containerd[1600]: time="2026-01-15T05:47:21.226561985Z" level=info msg="CreateContainer within sandbox \"15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 15 05:47:21.233273 containerd[1600]: time="2026-01-15T05:47:21.233170009Z" level=info msg="CreateContainer within sandbox \"f72dce97e8e74f61141ed9f99a2da678614a82c8e1ab50f8f9a69b4521b71523\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26\"" Jan 15 05:47:21.234204 
containerd[1600]: time="2026-01-15T05:47:21.234107206Z" level=info msg="StartContainer for \"b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26\"" Jan 15 05:47:21.235941 containerd[1600]: time="2026-01-15T05:47:21.235589002Z" level=info msg="connecting to shim b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26" address="unix:///run/containerd/s/f5363703a8fefed015eb51d9550df9ab9c0564e2eb652e70efd12affaa9653fc" protocol=ttrpc version=3 Jan 15 05:47:21.236717 containerd[1600]: time="2026-01-15T05:47:21.236640765Z" level=info msg="CreateContainer within sandbox \"be3fe15aad9400bda721704dc49cdad6b3dfa66c2f39f504f2e7fd24f78b1fc5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71\"" Jan 15 05:47:21.237681 containerd[1600]: time="2026-01-15T05:47:21.237644939Z" level=info msg="StartContainer for \"7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71\"" Jan 15 05:47:21.238656 kubelet[2392]: I0115 05:47:21.238608 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:47:21.239278 kubelet[2392]: E0115 05:47:21.239194 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 15 05:47:21.239786 containerd[1600]: time="2026-01-15T05:47:21.239715564Z" level=info msg="connecting to shim 7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71" address="unix:///run/containerd/s/b52836ede4cf5c9a72df913266881cf46f6fcd81902c32468bd318a9b925e1eb" protocol=ttrpc version=3 Jan 15 05:47:21.241777 containerd[1600]: time="2026-01-15T05:47:21.241699617Z" level=info msg="Container bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:21.252779 containerd[1600]: time="2026-01-15T05:47:21.252699229Z" level=info msg="CreateContainer within sandbox \"15259d8dcbddcfe3f11a51e16315cb9c7f8d2540b5076fe110997301264d848a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461\"" Jan 15 05:47:21.253514 containerd[1600]: time="2026-01-15T05:47:21.253476820Z" level=info msg="StartContainer for \"bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461\"" Jan 15 05:47:21.256582 systemd[1]: Started cri-containerd-b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26.scope - libcontainer container b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26. Jan 15 05:47:21.262338 containerd[1600]: time="2026-01-15T05:47:21.262089035Z" level=info msg="connecting to shim bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461" address="unix:///run/containerd/s/8520fcac4b453a78b76b5d81849c4f6751300e54796d2d18e0ab8893f6c31209" protocol=ttrpc version=3 Jan 15 05:47:21.278333 systemd[1]: Started cri-containerd-7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71.scope - libcontainer container 7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71. Jan 15 05:47:21.292629 systemd[1]: Started cri-containerd-bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461.scope - libcontainer container bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461. 
Jan 15 05:47:21.295000 audit: BPF prog-id=98 op=LOAD Jan 15 05:47:21.296000 audit: BPF prog-id=99 op=LOAD Jan 15 05:47:21.296000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.296000 audit: BPF prog-id=99 op=UNLOAD Jan 15 05:47:21.296000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.296000 audit: BPF prog-id=100 op=LOAD Jan 15 05:47:21.296000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.296000 audit: BPF prog-id=101 op=LOAD Jan 15 05:47:21.296000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.297000 audit: BPF prog-id=101 op=UNLOAD Jan 15 05:47:21.297000 audit[2567]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.297000 audit: BPF prog-id=100 op=UNLOAD Jan 15 05:47:21.297000 audit[2567]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.297000 audit: BPF prog-id=102 op=LOAD Jan 15 05:47:21.297000 audit[2567]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2451 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.297000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233393331616263343164303233316265393230333037663362343363 Jan 15 05:47:21.300000 audit: BPF prog-id=103 op=LOAD Jan 15 05:47:21.301000 audit: BPF prog-id=104 op=LOAD Jan 15 05:47:21.301000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.301000 audit: BPF prog-id=104 op=UNLOAD Jan 15 05:47:21.301000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.302000 audit: BPF prog-id=105 op=LOAD Jan 15 05:47:21.302000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.302000 audit: BPF prog-id=106 op=LOAD Jan 15 05:47:21.302000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2455 pid=2572 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.302000 audit: BPF prog-id=106 op=UNLOAD Jan 15 05:47:21.302000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.302000 audit: BPF prog-id=105 op=UNLOAD Jan 15 05:47:21.302000 audit[2572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.302000 audit: BPF prog-id=107 op=LOAD Jan 15 05:47:21.302000 audit[2572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2455 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.302000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764363435626237643632346534383331353732616134653261393362 Jan 15 05:47:21.314000 audit: BPF prog-id=108 op=LOAD Jan 15 05:47:21.314000 audit: BPF prog-id=109 op=LOAD Jan 15 05:47:21.314000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=109 op=UNLOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=110 op=LOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=111 op=LOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=111 op=UNLOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=110 op=UNLOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.315000 audit: BPF prog-id=112 op=LOAD Jan 15 05:47:21.315000 audit[2598]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2490 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:21.315000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653434663265623061636265663531383965303933373031316131 Jan 15 05:47:21.352892 containerd[1600]: time="2026-01-15T05:47:21.352660930Z" level=info msg="StartContainer for \"b3931abc41d0231be920307f3b43c7bbd787305e8a64b7fdb0c9cf6385c6aa26\" returns successfully" Jan 15 05:47:21.355545 kubelet[2392]: E0115 05:47:21.355311 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 15 05:47:21.366205 containerd[1600]: time="2026-01-15T05:47:21.366108353Z" level=info msg="StartContainer for \"7d645bb7d624e4831572aa4e2a93b3ea2f728e48aec1abe4c5f5e7767bd2ec71\" returns successfully" Jan 15 05:47:21.380411 containerd[1600]: time="2026-01-15T05:47:21.380142709Z" level=info msg="StartContainer for \"bee44f2eb0acbef5189e0937011a137c862be441ec73aed9295752e8bb4b8461\" returns successfully" Jan 15 05:47:21.509968 kubelet[2392]: E0115 05:47:21.509500 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:21.509968 kubelet[2392]: E0115 05:47:21.509712 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:21.532726 kubelet[2392]: E0115 05:47:21.532672 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:21.533731 kubelet[2392]: E0115 05:47:21.533680 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:21.539423 kubelet[2392]: E0115 05:47:21.539314 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:21.540066 kubelet[2392]: E0115 05:47:21.539989 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:22.041576 kubelet[2392]: I0115 05:47:22.041490 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:47:22.553709 kubelet[2392]: E0115 05:47:22.552800 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:22.554792 kubelet[2392]: E0115 05:47:22.554577 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:22.554828 kubelet[2392]: E0115 05:47:22.554795 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 15 05:47:22.557385 kubelet[2392]: E0115 05:47:22.555010 2392 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:22.931493 kubelet[2392]: E0115 05:47:22.926598 2392 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 15 05:47:23.047006 kubelet[2392]: I0115 05:47:23.046214 2392 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:47:23.060424 kubelet[2392]: I0115 05:47:23.059533 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:23.126145 kubelet[2392]: E0115 05:47:23.125830 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:23.127243 kubelet[2392]: I0115 05:47:23.127029 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:23.129102 kubelet[2392]: E0115 05:47:23.129025 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:23.129102 kubelet[2392]: I0115 05:47:23.129060 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:23.134547 kubelet[2392]: E0115 05:47:23.134516 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:23.445938 kubelet[2392]: I0115 05:47:23.441127 2392 apiserver.go:52] "Watching apiserver" Jan 15 05:47:23.561286 kubelet[2392]: I0115 05:47:23.560187 2392 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 05:47:23.635460 kubelet[2392]: I0115 05:47:23.565285 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:23.640080 kubelet[2392]: E0115 05:47:23.639608 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:23.640999 kubelet[2392]: E0115 05:47:23.640139 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:26.356641 kubelet[2392]: I0115 05:47:26.354225 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:26.484777 kubelet[2392]: E0115 05:47:26.473958 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:26.776395 kubelet[2392]: E0115 05:47:26.633201 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:28.231036 kubelet[2392]: I0115 05:47:28.230717 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 
05:47:28.253711 kubelet[2392]: E0115 05:47:28.253451 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:28.668060 kubelet[2392]: E0115 05:47:28.667964 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:30.565600 kubelet[2392]: I0115 05:47:30.565404 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.565333694 podStartE2EDuration="2.565333694s" podCreationTimestamp="2026-01-15 05:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:47:30.565094924 +0000 UTC m=+10.747997517" watchObservedRunningTime="2026-01-15 05:47:30.565333694 +0000 UTC m=+10.748236288" Jan 15 05:47:30.828237 systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-8.scope)... Jan 15 05:47:30.828312 systemd[1]: Reloading... Jan 15 05:47:31.046521 zram_generator::config[2721]: No configuration found. Jan 15 05:47:31.417433 systemd[1]: Reloading finished in 588 ms. Jan 15 05:47:31.463421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 05:47:31.483689 systemd[1]: kubelet.service: Deactivated successfully. Jan 15 05:47:31.484794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:31.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:31.485107 systemd[1]: kubelet.service: Consumed 2.969s CPU time, 131.4M memory peak. Jan 15 05:47:31.487652 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 15 05:47:31.487883 kernel: audit: type=1131 audit(1768456051.483:388): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:31.491168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 05:47:31.507750 kernel: audit: type=1334 audit(1768456051.493:389): prog-id=113 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=113 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=80 op=UNLOAD Jan 15 05:47:31.511735 kernel: audit: type=1334 audit(1768456051.493:390): prog-id=80 op=UNLOAD Jan 15 05:47:31.512031 kernel: audit: type=1334 audit(1768456051.493:391): prog-id=114 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=114 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=115 op=LOAD Jan 15 05:47:31.518226 kernel: audit: type=1334 audit(1768456051.493:392): prog-id=115 op=LOAD Jan 15 05:47:31.518315 kernel: audit: type=1334 audit(1768456051.493:393): prog-id=81 op=UNLOAD Jan 15 05:47:31.493000 audit: BPF prog-id=81 op=UNLOAD Jan 15 05:47:31.493000 audit: BPF prog-id=82 op=UNLOAD Jan 15 05:47:31.524584 kernel: audit: type=1334 audit(1768456051.493:394): prog-id=82 op=UNLOAD Jan 15 05:47:31.524629 kernel: audit: type=1334 audit(1768456051.493:395): prog-id=116 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=116 op=LOAD Jan 15 05:47:31.493000 audit: BPF prog-id=117 op=LOAD Jan 15 05:47:31.530241 kernel: audit: type=1334 audit(1768456051.493:396): prog-id=117 op=LOAD Jan 15 05:47:31.530310 kernel: audit: type=1334 audit(1768456051.493:397): prog-id=64 op=UNLOAD Jan 15 05:47:31.493000 audit: BPF prog-id=64 op=UNLOAD Jan 15 05:47:31.493000 audit: BPF prog-id=65 op=UNLOAD Jan 15 05:47:31.496000 audit: BPF prog-id=118 op=LOAD Jan 15 05:47:31.496000 audit: BPF prog-id=73 op=UNLOAD Jan 15 05:47:31.496000 audit: BPF prog-id=119 op=LOAD Jan 15 05:47:31.496000 audit: BPF prog-id=120 op=LOAD Jan 15 05:47:31.496000 audit: BPF prog-id=74 op=UNLOAD Jan 15 05:47:31.496000 audit: BPF prog-id=75 op=UNLOAD Jan 15 05:47:31.497000 audit: BPF prog-id=121 op=LOAD Jan 15 05:47:31.497000 audit: BPF prog-id=66 op=UNLOAD Jan 15 05:47:31.498000 audit: BPF prog-id=122 op=LOAD Jan 15 05:47:31.498000 audit: BPF prog-id=123 op=LOAD Jan 15 05:47:31.498000 audit: BPF prog-id=67 op=UNLOAD Jan 15 05:47:31.498000 audit: BPF prog-id=68 op=UNLOAD Jan 15 05:47:31.506000 audit: BPF prog-id=124 op=LOAD Jan 15 05:47:31.506000 audit: BPF prog-id=69 op=UNLOAD Jan 15 05:47:31.539000 audit: BPF prog-id=125 op=LOAD Jan 15 05:47:31.539000 audit: BPF prog-id=63 op=UNLOAD Jan 15 05:47:31.543000 audit: BPF prog-id=126 op=LOAD Jan 15 05:47:31.543000 audit: BPF prog-id=77 op=UNLOAD Jan 15 05:47:31.543000 audit: BPF prog-id=127 op=LOAD Jan 15 05:47:31.543000 audit: BPF prog-id=128 op=LOAD Jan 15 05:47:31.543000 audit: BPF prog-id=78 op=UNLOAD Jan 15 05:47:31.543000 audit: BPF prog-id=79 op=UNLOAD Jan 15 05:47:31.544000 audit: BPF prog-id=129 op=LOAD Jan 15 05:47:31.545000 audit: BPF prog-id=70 op=UNLOAD Jan 15 05:47:31.545000 audit: BPF prog-id=130 op=LOAD Jan 15 05:47:31.545000 audit: BPF prog-id=131 op=LOAD Jan 15 05:47:31.545000 audit: BPF prog-id=71 op=UNLOAD Jan 15 05:47:31.545000 audit: BPF prog-id=72 op=UNLOAD Jan 15 05:47:31.549000 audit: BPF prog-id=132 op=LOAD Jan 15 05:47:31.549000 audit: BPF prog-id=76 op=UNLOAD Jan 15 05:47:31.894900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 05:47:31.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:47:31.923139 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 05:47:32.000881 kubelet[2767]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:47:32.008797 kubelet[2767]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 05:47:32.008797 kubelet[2767]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 05:47:32.008797 kubelet[2767]: I0115 05:47:32.004811 2767 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 05:47:32.016986 kubelet[2767]: I0115 05:47:32.016884 2767 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 15 05:47:32.016986 kubelet[2767]: I0115 05:47:32.016934 2767 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 05:47:32.017244 kubelet[2767]: I0115 05:47:32.017234 2767 server.go:956] "Client rotation is on, will bootstrap in background" Jan 15 05:47:32.022465 kubelet[2767]: I0115 05:47:32.022170 2767 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 15 05:47:32.026831 kubelet[2767]: I0115 05:47:32.026792 2767 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 05:47:32.034707 kubelet[2767]: I0115 05:47:32.034565 2767 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 05:47:32.049918 kubelet[2767]: I0115 05:47:32.049741 2767 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 05:47:32.050477 kubelet[2767]: I0115 05:47:32.050157 2767 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 05:47:32.050477 kubelet[2767]: I0115 05:47:32.050199 2767 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 05:47:32.050477 kubelet[2767]: I0115 05:47:32.050467 2767 topology_manager.go:138] "Creating topology manager with none policy" Jan 15 05:47:32.050477 kubelet[2767]: I0115 05:47:32.050479 2767 container_manager_linux.go:303] "Creating device plugin manager" Jan 15 05:47:32.050852 kubelet[2767]: I0115 05:47:32.050546 2767 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:47:32.051867 kubelet[2767]: I0115 05:47:32.050882 2767 kubelet.go:480] "Attempting to sync node with API server" Jan 15 05:47:32.051867 kubelet[2767]: I0115 05:47:32.050895 2767 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 05:47:32.051867 kubelet[2767]: I0115 05:47:32.050920 2767 kubelet.go:386] "Adding apiserver pod source" Jan 15 05:47:32.051867 kubelet[2767]: I0115 05:47:32.050935 2767 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 05:47:32.052630 kubelet[2767]: I0115 05:47:32.052602 2767 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 15 05:47:32.053161 kubelet[2767]: I0115 05:47:32.053138 2767 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 15 05:47:32.082403 kubelet[2767]: I0115 05:47:32.082322 2767 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 05:47:32.082842 kubelet[2767]: I0115 05:47:32.082441 2767 server.go:1289] "Started kubelet" Jan 15 05:47:32.083488 kubelet[2767]: I0115 05:47:32.083434 2767 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 05:47:32.086415 kubelet[2767]: I0115 
05:47:32.085399 2767 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 05:47:32.086415 kubelet[2767]: I0115 05:47:32.086397 2767 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 05:47:32.086775 kubelet[2767]: I0115 05:47:32.086680 2767 server.go:317] "Adding debug handlers to kubelet server" Jan 15 05:47:32.087823 kubelet[2767]: I0115 05:47:32.087751 2767 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 05:47:32.088285 kubelet[2767]: I0115 05:47:32.088084 2767 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 05:47:32.090424 kubelet[2767]: I0115 05:47:32.090001 2767 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 05:47:32.090424 kubelet[2767]: I0115 05:47:32.090106 2767 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 05:47:32.090424 kubelet[2767]: I0115 05:47:32.090266 2767 reconciler.go:26] "Reconciler: start to sync state" Jan 15 05:47:32.093750 kubelet[2767]: E0115 05:47:32.092501 2767 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 05:47:32.101487 kubelet[2767]: I0115 05:47:32.099741 2767 factory.go:223] Registration of the containerd container factory successfully Jan 15 05:47:32.101487 kubelet[2767]: I0115 05:47:32.099771 2767 factory.go:223] Registration of the systemd container factory successfully Jan 15 05:47:32.101487 kubelet[2767]: I0115 05:47:32.100084 2767 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 05:47:32.140546 kubelet[2767]: I0115 05:47:32.140223 2767 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 15 05:47:32.145306 kubelet[2767]: I0115 05:47:32.145108 2767 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 15 05:47:32.145306 kubelet[2767]: I0115 05:47:32.145222 2767 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 15 05:47:32.145306 kubelet[2767]: I0115 05:47:32.145251 2767 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 05:47:32.145306 kubelet[2767]: I0115 05:47:32.145261 2767 kubelet.go:2436] "Starting kubelet main sync loop" Jan 15 05:47:32.150898 kubelet[2767]: E0115 05:47:32.150562 2767 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 05:47:32.230394 kubelet[2767]: I0115 05:47:32.230106 2767 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 05:47:32.230394 kubelet[2767]: I0115 05:47:32.230142 2767 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230503 2767 state_mem.go:36] "Initialized new in-memory state store" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230747 2767 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230758 2767 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230774 2767 policy_none.go:49] "None policy: Start" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230788 2767 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230801 2767 state_mem.go:35] "Initializing new in-memory state store" Jan 15 05:47:32.231109 kubelet[2767]: I0115 05:47:32.230894 2767 state_mem.go:75] "Updated machine memory state" Jan 15 05:47:32.240623 kubelet[2767]: E0115 05:47:32.240467 2767 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 15 05:47:32.241168 kubelet[2767]: I0115 05:47:32.240722 2767 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 05:47:32.241168 kubelet[2767]: I0115 05:47:32.240737 2767 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 05:47:32.242202 kubelet[2767]: I0115 05:47:32.242148 2767 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 05:47:32.246623 kubelet[2767]: E0115 05:47:32.246555 2767 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 15 05:47:32.252096 kubelet[2767]: I0115 05:47:32.251841 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:32.252591 kubelet[2767]: I0115 05:47:32.252292 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:32.254787 kubelet[2767]: I0115 05:47:32.254646 2767 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.291925 kubelet[2767]: I0115 05:47:32.291584 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:32.291925 kubelet[2767]: I0115 05:47:32.291688 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.291925 kubelet[2767]: I0115 05:47:32.291729 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.291925 kubelet[2767]: I0115 05:47:32.291760 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 15 05:47:32.291925 kubelet[2767]: I0115 05:47:32.291783 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:32.293420 kubelet[2767]: I0115 05:47:32.291802 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c048af14328a28b4eeac7d5b7fbee1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e8c048af14328a28b4eeac7d5b7fbee1\") " pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:32.293420 kubelet[2767]: I0115 05:47:32.291821 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.293420 kubelet[2767]: I0115 05:47:32.291845 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.293420 kubelet[2767]: I0115 05:47:32.291872 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.296903 kubelet[2767]: E0115 05:47:32.296709 2767 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 15 05:47:32.316058 kubelet[2767]: E0115 05:47:32.315795 2767 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 15 05:47:32.355722 kubelet[2767]: I0115 05:47:32.355500 2767 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 15 05:47:32.397079 kubelet[2767]: I0115 05:47:32.395907 2767 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 15 05:47:32.397079 kubelet[2767]: I0115 05:47:32.396171 2767 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 15 05:47:32.597679 kubelet[2767]: E0115 05:47:32.597566 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:32.608467 kubelet[2767]: E0115 05:47:32.608387 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:32.616562 kubelet[2767]: E0115 05:47:32.616520 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:33.052598 kubelet[2767]: I0115 05:47:33.052488 2767 apiserver.go:52] "Watching apiserver" Jan 15 05:47:33.090800 kubelet[2767]: I0115 05:47:33.090711 2767 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 05:47:33.193724 kubelet[2767]: E0115 05:47:33.193464 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:33.194456 kubelet[2767]: E0115 05:47:33.194051 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:33.194456 kubelet[2767]: E0115 05:47:33.194169 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:34.143824 kubelet[2767]: I0115 05:47:34.143641 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.143623531 podStartE2EDuration="2.143623531s" podCreationTimestamp="2026-01-15 05:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-15 05:47:33.758538979 +0000 UTC m=+1.820636384" watchObservedRunningTime="2026-01-15 05:47:34.143623531 +0000 UTC m=+2.205720915" Jan 15 05:47:34.202766 kubelet[2767]: E0115 05:47:34.202580 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:34.202766 kubelet[2767]: E0115 05:47:34.202598 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:34.202766 kubelet[2767]: E0115 05:47:34.202651 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:35.209295 kubelet[2767]: E0115 05:47:35.209057 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:35.209295 kubelet[2767]: E0115 05:47:35.209282 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:35.316332 kubelet[2767]: I0115 05:47:35.316147 2767 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 05:47:35.317578 kubelet[2767]: I0115 05:47:35.317276 2767 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 05:47:35.317630 containerd[1600]: time="2026-01-15T05:47:35.316900678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 05:47:36.021790 systemd[1]: Created slice kubepods-besteffort-pod23b96585_4ff6_4433_b5a3_99d452a2f88c.slice - libcontainer container kubepods-besteffort-pod23b96585_4ff6_4433_b5a3_99d452a2f88c.slice. 
Jan 15 05:47:36.032260 kubelet[2767]: I0115 05:47:36.032159 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23b96585-4ff6-4433-b5a3-99d452a2f88c-xtables-lock\") pod \"kube-proxy-lx9t4\" (UID: \"23b96585-4ff6-4433-b5a3-99d452a2f88c\") " pod="kube-system/kube-proxy-lx9t4" Jan 15 05:47:36.032477 kubelet[2767]: I0115 05:47:36.032269 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23b96585-4ff6-4433-b5a3-99d452a2f88c-lib-modules\") pod \"kube-proxy-lx9t4\" (UID: \"23b96585-4ff6-4433-b5a3-99d452a2f88c\") " pod="kube-system/kube-proxy-lx9t4" Jan 15 05:47:36.032477 kubelet[2767]: I0115 05:47:36.032307 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8p8\" (UniqueName: \"kubernetes.io/projected/23b96585-4ff6-4433-b5a3-99d452a2f88c-kube-api-access-gr8p8\") pod \"kube-proxy-lx9t4\" (UID: \"23b96585-4ff6-4433-b5a3-99d452a2f88c\") " pod="kube-system/kube-proxy-lx9t4" Jan 15 05:47:36.032585 kubelet[2767]: I0115 05:47:36.032336 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23b96585-4ff6-4433-b5a3-99d452a2f88c-kube-proxy\") pod \"kube-proxy-lx9t4\" (UID: \"23b96585-4ff6-4433-b5a3-99d452a2f88c\") " pod="kube-system/kube-proxy-lx9t4" Jan 15 05:47:36.212897 systemd[1]: Created slice kubepods-besteffort-pod686f9ce3_6e26_4613_aa04_fc6303d97cda.slice - libcontainer container kubepods-besteffort-pod686f9ce3_6e26_4613_aa04_fc6303d97cda.slice. Jan 15 05:47:36.234549 kubelet[2767]: I0115 05:47:36.234395 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/686f9ce3-6e26-4613-aa04-fc6303d97cda-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5f7nv\" (UID: \"686f9ce3-6e26-4613-aa04-fc6303d97cda\") " pod="tigera-operator/tigera-operator-7dcd859c48-5f7nv" Jan 15 05:47:36.234549 kubelet[2767]: I0115 05:47:36.234475 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnnk\" (UniqueName: \"kubernetes.io/projected/686f9ce3-6e26-4613-aa04-fc6303d97cda-kube-api-access-sgnnk\") pod \"tigera-operator-7dcd859c48-5f7nv\" (UID: \"686f9ce3-6e26-4613-aa04-fc6303d97cda\") " pod="tigera-operator/tigera-operator-7dcd859c48-5f7nv" Jan 15 05:47:36.344138 kubelet[2767]: E0115 05:47:36.344025 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:36.345419 containerd[1600]: time="2026-01-15T05:47:36.345305779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lx9t4,Uid:23b96585-4ff6-4433-b5a3-99d452a2f88c,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:36.406182 containerd[1600]: time="2026-01-15T05:47:36.405704755Z" level=info msg="connecting to shim d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134" address="unix:///run/containerd/s/5ac2c1e0bc7ce8c2f7aed93e5b9091730b9ee8801ecd0efe8da09bcee80d44b0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:36.472783 systemd[1]: Started cri-containerd-d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134.scope - libcontainer container 
d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134. Jan 15 05:47:36.490000 audit: BPF prog-id=133 op=LOAD Jan 15 05:47:36.492792 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 15 05:47:36.492871 kernel: audit: type=1334 audit(1768456056.490:430): prog-id=133 op=LOAD Jan 15 05:47:36.491000 audit: BPF prog-id=134 op=LOAD Jan 15 05:47:36.497048 kernel: audit: type=1334 audit(1768456056.491:431): prog-id=134 op=LOAD Jan 15 05:47:36.497097 kernel: audit: type=1300 audit(1768456056.491:431): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.525966 kernel: audit: type=1327 audit(1768456056.491:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.526471 containerd[1600]: time="2026-01-15T05:47:36.520941598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5f7nv,Uid:686f9ce3-6e26-4613-aa04-fc6303d97cda,Namespace:tigera-operator,Attempt:0,}" Jan 15 05:47:36.491000 audit: BPF prog-id=134 op=UNLOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.539709 kernel: audit: type=1334 audit(1768456056.491:432): prog-id=134 op=UNLOAD Jan 15 05:47:36.540336 kernel: audit: type=1300 audit(1768456056.491:432): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.491000 audit: BPF prog-id=135 op=LOAD Jan 15 05:47:36.550978 kernel: audit: type=1327 audit(1768456056.491:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.551023 
kernel: audit: type=1334 audit(1768456056.491:433): prog-id=135 op=LOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.558567 kubelet[2767]: E0115 05:47:36.557273 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:36.558660 containerd[1600]: time="2026-01-15T05:47:36.555616776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lx9t4,Uid:23b96585-4ff6-4433-b5a3-99d452a2f88c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134\"" Jan 15 05:47:36.560449 kernel: audit: type=1300 audit(1768456056.491:433): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.564434 containerd[1600]: time="2026-01-15T05:47:36.562805757Z" level=info msg="connecting to shim c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761" address="unix:///run/containerd/s/31cdaa86261d13f42355d8ab1de66e5a7a194a0fba526f60dd35d871a721771b" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:36.565760 containerd[1600]: time="2026-01-15T05:47:36.565713837Z" level=info msg="CreateContainer within sandbox \"d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 05:47:36.491000 audit: BPF prog-id=136 op=LOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.491000 audit: BPF prog-id=136 op=UNLOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 
15 05:47:36.491000 audit: BPF prog-id=135 op=UNLOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.491000 audit: BPF prog-id=137 op=LOAD Jan 15 05:47:36.491000 audit[2842]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2831 pid=2842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.570426 kernel: audit: type=1327 audit(1768456056.491:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393230336635383831653137613634383034383663663733383133 Jan 15 05:47:36.585428 containerd[1600]: time="2026-01-15T05:47:36.584962876Z" level=info msg="Container 679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:36.593411 containerd[1600]: time="2026-01-15T05:47:36.593308949Z" level=info msg="CreateContainer within sandbox \"d69203f5881e17a6480486cf73813f9a7c30f30425b08fe40bc8d9bd7c4b2134\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55\"" Jan 15 05:47:36.596202 containerd[1600]: time="2026-01-15T05:47:36.595575070Z" level=info msg="StartContainer for \"679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55\"" Jan 15 05:47:36.597094 containerd[1600]: time="2026-01-15T05:47:36.597055700Z" level=info msg="connecting to shim 679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55" address="unix:///run/containerd/s/5ac2c1e0bc7ce8c2f7aed93e5b9091730b9ee8801ecd0efe8da09bcee80d44b0" protocol=ttrpc version=3 Jan 15 05:47:36.601728 systemd[1]: Started cri-containerd-c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761.scope - libcontainer container c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761. Jan 15 05:47:36.644724 systemd[1]: Started cri-containerd-679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55.scope - libcontainer container 679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55. 
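The audit records that follow (the type=1334 BPF entries from runc and the NETFILTER_CFG entries from kube-proxy's iptables calls) carry the invoking command line as a PROCTITLE field: the full argv hex-encoded with NUL bytes between arguments, and truncated by the kernel after a fixed length, which is why the long runc proctitles below end mid-path. A small decoder sketch; the sample value is copied from one of the ip6tables NETFILTER_CFG records further down:

    # PROCTITLE is hex-encoded argv with NUL separators; very long command
    # lines are truncated by the kernel, so the last argument may be cut short.
    def decode_proctitle(hexstr: str) -> str:
        return " ".join(
            part.decode(errors="replace")
            for part in bytes.fromhex(hexstr).split(b"\x00")
            if part
        )

    sample = (
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    )
    print(decode_proctitle(sample))
    # ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle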
Jan 15 05:47:36.648000 audit: BPF prog-id=138 op=LOAD Jan 15 05:47:36.649000 audit: BPF prog-id=139 op=LOAD Jan 15 05:47:36.649000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.649000 audit: BPF prog-id=139 op=UNLOAD Jan 15 05:47:36.649000 audit[2888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.649000 audit: BPF prog-id=140 op=LOAD Jan 15 05:47:36.649000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.650000 audit: BPF prog-id=141 op=LOAD Jan 15 05:47:36.650000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.650000 audit: BPF prog-id=141 op=UNLOAD Jan 15 05:47:36.650000 audit[2888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.650000 audit: BPF prog-id=140 op=UNLOAD Jan 15 05:47:36.650000 audit[2888]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.650000 audit: BPF prog-id=142 op=LOAD Jan 15 05:47:36.650000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334636666386666353330323363363237646235323162306463326161 Jan 15 05:47:36.699276 containerd[1600]: time="2026-01-15T05:47:36.698997665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5f7nv,Uid:686f9ce3-6e26-4613-aa04-fc6303d97cda,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761\"" Jan 15 05:47:36.705819 containerd[1600]: time="2026-01-15T05:47:36.705692835Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 05:47:36.743000 audit: BPF prog-id=143 op=LOAD Jan 15 05:47:36.743000 audit[2900]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2831 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396135376136323039633935316336383636646266303738636535 Jan 15 05:47:36.743000 audit: BPF prog-id=144 op=LOAD Jan 15 05:47:36.743000 audit[2900]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2831 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396135376136323039633935316336383636646266303738636535 Jan 15 05:47:36.743000 audit: BPF prog-id=144 op=UNLOAD Jan 15 05:47:36.743000 audit[2900]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.743000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396135376136323039633935316336383636646266303738636535 Jan 15 05:47:36.743000 audit: BPF prog-id=143 op=UNLOAD Jan 15 05:47:36.743000 audit[2900]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2831 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396135376136323039633935316336383636646266303738636535 Jan 15 05:47:36.743000 audit: BPF prog-id=145 op=LOAD Jan 15 05:47:36.743000 audit[2900]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2831 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:36.743000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637396135376136323039633935316336383636646266303738636535 Jan 15 05:47:36.776662 containerd[1600]: time="2026-01-15T05:47:36.776487148Z" level=info msg="StartContainer for \"679a57a6209c951c6866dbf078ce5ddb5e8ad7b805cc85950b3d5566c146ff55\" returns successfully" Jan 15 05:47:37.010000 audit[2982]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.010000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd446bf2b0 a2=0 a3=7ffd446bf29c items=0 ppid=2920 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:47:37.014000 audit[2983]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.014000 audit[2983]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc59965290 a2=0 a3=7ffc5996527c items=0 ppid=2920 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 15 05:47:37.015000 audit[2984]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.015000 audit[2984]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca21ab0b0 a2=0 a3=7ffca21ab09c items=0 ppid=2920 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:47:37.017000 audit[2988]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.017000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2726ae0 a2=0 a3=7ffef2726acc items=0 ppid=2920 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 15 05:47:37.019000 audit[2989]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.019000 audit[2989]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc1e98420 a2=0 a3=7ffdc1e9840c items=0 ppid=2920 pid=2989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:47:37.022000 audit[2991]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.022000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc852d62a0 a2=0 a3=7ffc852d628c items=0 ppid=2920 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 15 05:47:37.116000 audit[2993]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.116000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd16811840 a2=0 a3=7ffd1681182c items=0 ppid=2920 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:47:37.124000 audit[2995]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.124000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff03075a50 a2=0 a3=7fff03075a3c items=0 ppid=2920 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 15 05:47:37.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 15 05:47:37.132000 audit[2998]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.132000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff983138c0 a2=0 a3=7fff983138ac items=0 ppid=2920 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 15 05:47:37.134000 audit[2999]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.134000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc990e1fd0 a2=0 a3=7ffc990e1fbc items=0 ppid=2920 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.134000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:47:37.141000 audit[3001]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.141000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8bf12f50 a2=0 a3=7fff8bf12f3c items=0 ppid=2920 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.141000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:47:37.143000 audit[3002]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.143000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd444778f0 a2=0 a3=7ffd444778dc items=0 ppid=2920 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:47:37.149000 audit[3004]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.149000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 
a0=3 a1=7ffca5de21a0 a2=0 a3=7ffca5de218c items=0 ppid=2920 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 05:47:37.157000 audit[3007]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.157000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb0db2700 a2=0 a3=7fffb0db26ec items=0 ppid=2920 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 15 05:47:37.161000 audit[3008]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.161000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce8a93900 a2=0 a3=7ffce8a938ec items=0 ppid=2920 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:47:37.166000 audit[3010]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.166000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff70826ca0 a2=0 a3=7fff70826c8c items=0 ppid=2920 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.166000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:47:37.169000 audit[3011]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.169000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeeb79cd80 a2=0 a3=7ffeeb79cd6c items=0 ppid=2920 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.169000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 
05:47:37.174000 audit[3013]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.174000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef21fd700 a2=0 a3=7ffef21fd6ec items=0 ppid=2920 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:47:37.182000 audit[3016]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.182000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2b503350 a2=0 a3=7fff2b50333c items=0 ppid=2920 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:47:37.189000 audit[3019]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.189000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd423e4a70 a2=0 a3=7ffd423e4a5c items=0 ppid=2920 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 05:47:37.192000 audit[3020]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.192000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf41f74d0 a2=0 a3=7ffdf41f74bc items=0 ppid=2920 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:47:37.196000 audit[3022]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.196000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff4ad8c0c0 a2=0 a3=7fff4ad8c0ac items=0 ppid=2920 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:47:37.210000 audit[3025]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.210000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea4080870 a2=0 a3=7ffea408085c items=0 ppid=2920 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:47:37.212000 audit[3026]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.212000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8e766130 a2=0 a3=7fff8e76611c items=0 ppid=2920 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:47:37.218164 kubelet[2767]: E0115 05:47:37.218128 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:37.218000 audit[3028]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 15 05:47:37.218000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffbc1545d0 a2=0 a3=7fffbc1545bc items=0 ppid=2920 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:47:37.232842 kubelet[2767]: I0115 05:47:37.232742 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lx9t4" podStartSLOduration=2.232653463 podStartE2EDuration="2.232653463s" podCreationTimestamp="2026-01-15 05:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:47:37.231179288 +0000 UTC m=+5.293276683" watchObservedRunningTime="2026-01-15 05:47:37.232653463 +0000 UTC m=+5.294750868" Jan 15 05:47:37.256000 audit[3034]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 15 05:47:37.256000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcdd429f80 a2=0 a3=7ffcdd429f6c items=0 ppid=2920 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:37.267000 audit[3034]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:37.267000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcdd429f80 a2=0 a3=7ffcdd429f6c items=0 ppid=2920 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:37.270000 audit[3039]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.270000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe5f9c34d0 a2=0 a3=7ffe5f9c34bc items=0 ppid=2920 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 15 05:47:37.276000 audit[3041]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.276000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff29e38990 a2=0 a3=7fff29e3897c items=0 ppid=2920 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 15 05:47:37.284000 audit[3044]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.284000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdc471c290 a2=0 a3=7ffdc471c27c items=0 ppid=2920 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.284000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 15 05:47:37.287000 audit[3045]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.287000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6e27cd40 a2=0 a3=7ffd6e27cd2c items=0 ppid=2920 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 15 05:47:37.292000 audit[3047]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.292000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff084dd9d0 a2=0 a3=7fff084dd9bc items=0 ppid=2920 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 15 05:47:37.295000 audit[3048]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.295000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc1b927f0 a2=0 a3=7ffdc1b927dc items=0 ppid=2920 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 15 05:47:37.301000 audit[3050]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.301000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0d315700 a2=0 a3=7ffc0d3156ec items=0 ppid=2920 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 15 05:47:37.311000 audit[3053]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.311000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd77457700 a2=0 
a3=7ffd774576ec items=0 ppid=2920 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 15 05:47:37.313000 audit[3054]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.313000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd9d0cda0 a2=0 a3=7ffdd9d0cd8c items=0 ppid=2920 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 15 05:47:37.318000 audit[3056]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.318000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffae05e60 a2=0 a3=7ffffae05e4c items=0 ppid=2920 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 15 05:47:37.322000 audit[3057]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.322000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca4f25000 a2=0 a3=7ffca4f24fec items=0 ppid=2920 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 15 05:47:37.327000 audit[3059]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.327000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe408c6c20 a2=0 a3=7ffe408c6c0c items=0 ppid=2920 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 15 05:47:37.335000 
audit[3062]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.335000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5abacec0 a2=0 a3=7fff5abaceac items=0 ppid=2920 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 15 05:47:37.345000 audit[3065]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.345000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed9d435e0 a2=0 a3=7ffed9d435cc items=0 ppid=2920 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.345000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 15 05:47:37.347000 audit[3066]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.347000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9234a040 a2=0 a3=7fff9234a02c items=0 ppid=2920 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 15 05:47:37.352000 audit[3068]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.352000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd2746c250 a2=0 a3=7ffd2746c23c items=0 ppid=2920 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:47:37.360000 audit[3071]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.360000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffed02c64d0 a2=0 a3=7ffed02c64bc items=0 ppid=2920 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 15 05:47:37.362000 audit[3072]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.362000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee2430be0 a2=0 a3=7ffee2430bcc items=0 ppid=2920 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 15 05:47:37.368000 audit[3074]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.368000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd5a43baa0 a2=0 a3=7ffd5a43ba8c items=0 ppid=2920 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 15 05:47:37.370000 audit[3075]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.370000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef1729a00 a2=0 a3=7ffef17299ec items=0 ppid=2920 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 15 05:47:37.375000 audit[3077]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.375000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2ef065b0 a2=0 a3=7ffd2ef0659c items=0 ppid=2920 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:47:37.382000 audit[3080]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 15 05:47:37.382000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd02fda4c0 a2=0 a3=7ffd02fda4ac items=0 ppid=2920 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 15 05:47:37.389000 audit[3082]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:47:37.389000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffea2890040 a2=0 a3=7ffea289002c items=0 ppid=2920 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.389000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:37.389000 audit[3082]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 15 05:47:37.389000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffea2890040 a2=0 a3=7ffea289002c items=0 ppid=2920 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:37.389000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:37.907975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276821511.mount: Deactivated successfully. 
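The PROCTITLE fields in the audit records above hex-encode the process command line, with arguments separated by NUL bytes. A minimal decoding sketch (Python; the helper name is illustrative), applied to the iptables-restore value that repeats above:

    # Decode an audit PROCTITLE value: hex string -> NUL-separated argv -> readable command line.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The ip6tables PROCTITLE values decode the same way, for example to commands such as ip6tables -w 5 -W 100000 -N KUBE-SERVICES -t filter.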
Jan 15 05:47:37.960214 kubelet[2767]: E0115 05:47:37.960136 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:38.220584 kubelet[2767]: E0115 05:47:38.219618 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:38.636448 kubelet[2767]: E0115 05:47:38.636287 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:39.221439 kubelet[2767]: E0115 05:47:39.221304 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:39.825222 containerd[1600]: time="2026-01-15T05:47:39.825165324Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:39.826384 containerd[1600]: time="2026-01-15T05:47:39.826287564Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 15 05:47:39.827651 containerd[1600]: time="2026-01-15T05:47:39.827563078Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:39.829774 containerd[1600]: time="2026-01-15T05:47:39.829735343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:39.830300 containerd[1600]: time="2026-01-15T05:47:39.830267971Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.124386748s" Jan 15 05:47:39.830389 containerd[1600]: time="2026-01-15T05:47:39.830303256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 15 05:47:39.836101 containerd[1600]: time="2026-01-15T05:47:39.835971741Z" level=info msg="CreateContainer within sandbox \"c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 05:47:39.845574 containerd[1600]: time="2026-01-15T05:47:39.845504054Z" level=info msg="Container 709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:39.852206 containerd[1600]: time="2026-01-15T05:47:39.852107708Z" level=info msg="CreateContainer within sandbox \"c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699\"" Jan 15 05:47:39.852819 containerd[1600]: time="2026-01-15T05:47:39.852741866Z" level=info msg="StartContainer for \"709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699\"" 
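The recurring dns.go "Nameserver limits exceeded" errors above come from kubelet trimming the host's resolver list: only the first three nameservers are applied (the classic resolv.conf limit), and anything beyond that is dropped. A minimal sketch of that trimming (Python; the fourth address is a hypothetical extra entry that would trigger the warning):

    # kubelet applies at most three nameservers and logs the rest as omitted.
    MAX_NAMESERVERS = 3

    def applied_nameservers(configured: list[str]) -> list[str]:
        return configured[:MAX_NAMESERVERS]

    # The first three match the "applied nameserver line" in the records above;
    # the fourth is a hypothetical extra entry that would be dropped.
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']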
Jan 15 05:47:39.853734 containerd[1600]: time="2026-01-15T05:47:39.853688129Z" level=info msg="connecting to shim 709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699" address="unix:///run/containerd/s/31cdaa86261d13f42355d8ab1de66e5a7a194a0fba526f60dd35d871a721771b" protocol=ttrpc version=3 Jan 15 05:47:39.882566 systemd[1]: Started cri-containerd-709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699.scope - libcontainer container 709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699. Jan 15 05:47:39.900000 audit: BPF prog-id=146 op=LOAD Jan 15 05:47:39.900000 audit: BPF prog-id=147 op=LOAD Jan 15 05:47:39.900000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.900000 audit: BPF prog-id=147 op=UNLOAD Jan 15 05:47:39.900000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.900000 audit: BPF prog-id=148 op=LOAD Jan 15 05:47:39.900000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.901000 audit: BPF prog-id=149 op=LOAD Jan 15 05:47:39.901000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.901000 audit: BPF prog-id=149 op=UNLOAD Jan 15 05:47:39.901000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.901000 audit: BPF prog-id=148 op=UNLOAD Jan 15 05:47:39.901000 audit[3091]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.901000 audit: BPF prog-id=150 op=LOAD Jan 15 05:47:39.901000 audit[3091]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2877 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:39.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730396433333361386332373561333764313138356230386166393033 Jan 15 05:47:39.921776 containerd[1600]: time="2026-01-15T05:47:39.921728637Z" level=info msg="StartContainer for \"709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699\" returns successfully" Jan 15 05:47:40.225216 kubelet[2767]: E0115 05:47:40.225056 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:42.103204 systemd[1]: cri-containerd-709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699.scope: Deactivated successfully. Jan 15 05:47:42.107380 containerd[1600]: time="2026-01-15T05:47:42.106560805Z" level=info msg="received container exit event container_id:\"709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699\" id:\"709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699\" pid:3105 exit_status:1 exited_at:{seconds:1768456062 nanos:105835720}" Jan 15 05:47:42.112496 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 15 05:47:42.112591 kernel: audit: type=1334 audit(1768456062.109:510): prog-id=146 op=UNLOAD Jan 15 05:47:42.109000 audit: BPF prog-id=146 op=UNLOAD Jan 15 05:47:42.109000 audit: BPF prog-id=150 op=UNLOAD Jan 15 05:47:42.116016 kernel: audit: type=1334 audit(1768456062.109:511): prog-id=150 op=UNLOAD Jan 15 05:47:42.162995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699-rootfs.mount: Deactivated successfully. 
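The SYSCALL records in this section report raw syscall numbers for arch=c000003e, i.e. x86_64. A small lookup covering just the numbers that appear here (Python; the comments are interpretations of the surrounding records, not part of the log):

    # x86_64 syscall numbers seen in the audit records of this section.
    X86_64_SYSCALLS = {
        3: "close",     # runc closing file descriptors (paired with the BPF UNLOAD events)
        46: "sendmsg",  # xtables-nft-multi sending nftables netlink messages
        321: "bpf",     # runc loading eBPF programs at container start (BPF prog-id LOAD)
    }

    for field in ("syscall=46", "syscall=321", "syscall=3"):
        number = int(field.split("=", 1)[1])
        print(field, "->", X86_64_SYSCALLS.get(number, "unknown"))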
Jan 15 05:47:42.178268 kubelet[2767]: I0115 05:47:42.178200 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5f7nv" podStartSLOduration=3.049017569 podStartE2EDuration="6.178176711s" podCreationTimestamp="2026-01-15 05:47:36 +0000 UTC" firstStartedPulling="2026-01-15 05:47:36.702169514 +0000 UTC m=+4.764266908" lastFinishedPulling="2026-01-15 05:47:39.831328656 +0000 UTC m=+7.893426050" observedRunningTime="2026-01-15 05:47:40.235977513 +0000 UTC m=+8.298074907" watchObservedRunningTime="2026-01-15 05:47:42.178176711 +0000 UTC m=+10.240274106" Jan 15 05:47:43.233754 kubelet[2767]: I0115 05:47:43.233684 2767 scope.go:117] "RemoveContainer" containerID="709d333a8c275a37d1185b08af903bf9af292687befd531abc001db366631699" Jan 15 05:47:43.236238 containerd[1600]: time="2026-01-15T05:47:43.236150301Z" level=info msg="CreateContainer within sandbox \"c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 15 05:47:43.251161 containerd[1600]: time="2026-01-15T05:47:43.251111627Z" level=info msg="Container d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:43.254427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487337723.mount: Deactivated successfully. Jan 15 05:47:43.259913 containerd[1600]: time="2026-01-15T05:47:43.259794762Z" level=info msg="CreateContainer within sandbox \"c4cff8ff53023c627db521b0dc2aa914f33d8b6d96eff0fc6fab0ac1eeffc761\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078\"" Jan 15 05:47:43.260530 containerd[1600]: time="2026-01-15T05:47:43.260476271Z" level=info msg="StartContainer for \"d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078\"" Jan 15 05:47:43.263398 containerd[1600]: time="2026-01-15T05:47:43.262225353Z" level=info msg="connecting to shim d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078" address="unix:///run/containerd/s/31cdaa86261d13f42355d8ab1de66e5a7a194a0fba526f60dd35d871a721771b" protocol=ttrpc version=3 Jan 15 05:47:43.292549 systemd[1]: Started cri-containerd-d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078.scope - libcontainer container d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078. 
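The pod_startup_latency_tracker record above reports two durations for tigera-operator-7dcd859c48-5f7nv; both can be recomputed from the timestamps it carries, since the SLO duration in this record equals the end-to-end duration minus the image-pull window. A quick check (Python; seconds-within-minute are used because every timestamp falls inside 05:47):

    from decimal import Decimal

    created          = Decimal("36.000000000")  # podCreationTimestamp 05:47:36
    first_pulling    = Decimal("36.702169514")  # firstStartedPulling
    finished_pulling = Decimal("39.831328656")  # lastFinishedPulling
    observed_running = Decimal("42.178176711")  # observedRunningTime

    e2e  = observed_running - created           # podStartE2EDuration
    pull = finished_pulling - first_pulling     # image pull window
    slo  = e2e - pull                           # podStartSLOduration

    print(e2e, pull, slo)  # 6.178176711 3.129159142 3.049017569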
Jan 15 05:47:43.312000 audit: BPF prog-id=151 op=LOAD Jan 15 05:47:43.317023 kernel: audit: type=1334 audit(1768456063.312:512): prog-id=151 op=LOAD Jan 15 05:47:43.317098 kernel: audit: type=1334 audit(1768456063.313:513): prog-id=152 op=LOAD Jan 15 05:47:43.313000 audit: BPF prog-id=152 op=LOAD Jan 15 05:47:43.313000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.324492 kernel: audit: type=1300 audit(1768456063.313:513): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.331681 kernel: audit: type=1327 audit(1768456063.313:513): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.334826 kernel: audit: type=1334 audit(1768456063.313:514): prog-id=152 op=UNLOAD Jan 15 05:47:43.313000 audit: BPF prog-id=152 op=UNLOAD Jan 15 05:47:43.313000 audit[3167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.343494 kernel: audit: type=1300 audit(1768456063.313:514): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.343563 kernel: audit: type=1327 audit(1768456063.313:514): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.313000 audit: BPF prog-id=153 op=LOAD Jan 15 05:47:43.355512 kernel: audit: type=1334 audit(1768456063.313:515): prog-id=153 op=LOAD Jan 15 05:47:43.313000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
15 05:47:43.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.313000 audit: BPF prog-id=154 op=LOAD Jan 15 05:47:43.313000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.313000 audit: BPF prog-id=154 op=UNLOAD Jan 15 05:47:43.313000 audit[3167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.314000 audit: BPF prog-id=153 op=UNLOAD Jan 15 05:47:43.314000 audit[3167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.314000 audit: BPF prog-id=155 op=LOAD Jan 15 05:47:43.314000 audit[3167]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2877 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:43.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433373138366538323662353530316362656234343461353063386662 Jan 15 05:47:43.361581 containerd[1600]: time="2026-01-15T05:47:43.361424225Z" level=info msg="StartContainer for \"d37186e826b5501cbeb444a50c8fb6d2374bf328e771b932d15a0c7d8c905078\" returns successfully" Jan 15 05:47:44.461665 update_engine[1583]: I20260115 05:47:44.461545 1583 update_attempter.cc:509] Updating boot flags... 
Jan 15 05:47:45.359566 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 15 05:47:45.359000 audit[1820]: USER_END pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:45.359000 audit[1820]: CRED_DISP pid=1820 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 15 05:47:45.364533 sshd[1819]: Connection closed by 10.0.0.1 port 54426 Jan 15 05:47:45.365470 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 15 05:47:45.367000 audit[1815]: USER_END pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:45.367000 audit[1815]: CRED_DISP pid=1815 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:47:45.371682 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:54426.service: Deactivated successfully. Jan 15 05:47:45.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.92:22-10.0.0.1:54426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:47:45.374821 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 05:47:45.375251 systemd[1]: session-8.scope: Consumed 6.818s CPU time, 213.5M memory peak. Jan 15 05:47:45.376959 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Jan 15 05:47:45.378640 systemd-logind[1577]: Removed session 8. 
Jan 15 05:47:47.281463 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 15 05:47:47.281623 kernel: audit: type=1325 audit(1768456067.275:525): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.275000 audit[3245]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.275000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff058481d0 a2=0 a3=7fff058481bc items=0 ppid=2920 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.294457 kernel: audit: type=1300 audit(1768456067.275:525): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff058481d0 a2=0 a3=7fff058481bc items=0 ppid=2920 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.300402 kernel: audit: type=1327 audit(1768456067.275:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.307000 audit[3245]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.314407 kernel: audit: type=1325 audit(1768456067.307:526): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.307000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff058481d0 a2=0 a3=0 items=0 ppid=2920 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.326400 kernel: audit: type=1300 audit(1768456067.307:526): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff058481d0 a2=0 a3=0 items=0 ppid=2920 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.335397 kernel: audit: type=1327 audit(1768456067.307:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.396000 audit[3247]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.402575 kernel: audit: type=1325 audit(1768456067.396:527): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.396000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce7d17720 a2=0 a3=7ffce7d1770c items=0 ppid=2920 pid=3247 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.413791 kernel: audit: type=1300 audit(1768456067.396:527): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffce7d17720 a2=0 a3=7ffce7d1770c items=0 ppid=2920 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.423032 kernel: audit: type=1327 audit(1768456067.396:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:47.423078 kernel: audit: type=1325 audit(1768456067.413:528): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.413000 audit[3247]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:47.413000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce7d17720 a2=0 a3=0 items=0 ppid=2920 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:47.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:49.480000 audit[3249]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:49.480000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffef0fc2d00 a2=0 a3=7ffef0fc2cec items=0 ppid=2920 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:49.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:49.484000 audit[3249]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:49.484000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef0fc2d00 a2=0 a3=0 items=0 ppid=2920 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:49.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:49.511000 audit[3251]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:49.511000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff8aa1cff0 a2=0 a3=7fff8aa1cfdc items=0 ppid=2920 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:49.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:49.517000 audit[3251]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:49.517000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8aa1cff0 a2=0 a3=0 items=0 ppid=2920 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:49.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:50.531000 audit[3253]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:50.531000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe45e24cd0 a2=0 a3=7ffe45e24cbc items=0 ppid=2920 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:50.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:50.536000 audit[3253]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:50.536000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe45e24cd0 a2=0 a3=0 items=0 ppid=2920 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:50.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:51.140776 systemd[1]: Created slice kubepods-besteffort-pod341fa87a_5990_446a_ab5a_e838accfea57.slice - libcontainer container kubepods-besteffort-pod341fa87a_5990_446a_ab5a_e838accfea57.slice. 
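The kubepods-besteffort-pod...slice unit created above encodes the pod's Kubernetes UID under the BestEffort QoS parent, with the UID's hyphens mapped to underscores (systemd cgroup driver naming). A small helper reproducing that name (Python; scoped to BestEffort pods as seen on this node):

    def besteffort_slice_name(pod_uid: str) -> str:
        # '-' is the slice hierarchy separator in systemd, so kubelet maps the
        # UID's hyphens to underscores inside the unit name.
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    # UID taken from the calico-typha volume records that follow:
    print(besteffort_slice_name("341fa87a-5990-446a-ab5a-e838accfea57"))
    # -> kubepods-besteffort-pod341fa87a_5990_446a_ab5a_e838accfea57.slice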
Jan 15 05:47:51.244072 kubelet[2767]: I0115 05:47:51.243976 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/341fa87a-5990-446a-ab5a-e838accfea57-tigera-ca-bundle\") pod \"calico-typha-5447787d76-8xpl2\" (UID: \"341fa87a-5990-446a-ab5a-e838accfea57\") " pod="calico-system/calico-typha-5447787d76-8xpl2" Jan 15 05:47:51.244072 kubelet[2767]: I0115 05:47:51.244022 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/341fa87a-5990-446a-ab5a-e838accfea57-typha-certs\") pod \"calico-typha-5447787d76-8xpl2\" (UID: \"341fa87a-5990-446a-ab5a-e838accfea57\") " pod="calico-system/calico-typha-5447787d76-8xpl2" Jan 15 05:47:51.244072 kubelet[2767]: I0115 05:47:51.244042 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5mh\" (UniqueName: \"kubernetes.io/projected/341fa87a-5990-446a-ab5a-e838accfea57-kube-api-access-zt5mh\") pod \"calico-typha-5447787d76-8xpl2\" (UID: \"341fa87a-5990-446a-ab5a-e838accfea57\") " pod="calico-system/calico-typha-5447787d76-8xpl2" Jan 15 05:47:51.410815 systemd[1]: Created slice kubepods-besteffort-pod50762457_1de3_4ad8_a324_b2976d17becf.slice - libcontainer container kubepods-besteffort-pod50762457_1de3_4ad8_a324_b2976d17becf.slice. Jan 15 05:47:51.445499 kubelet[2767]: I0115 05:47:51.445328 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-cni-net-dir\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445499 kubelet[2767]: I0115 05:47:51.445488 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-cni-bin-dir\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445678 kubelet[2767]: I0115 05:47:51.445515 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-lib-modules\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445678 kubelet[2767]: I0115 05:47:51.445538 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-cni-log-dir\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445678 kubelet[2767]: I0115 05:47:51.445581 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2r6\" (UniqueName: \"kubernetes.io/projected/50762457-1de3-4ad8-a324-b2976d17becf-kube-api-access-cr2r6\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445678 kubelet[2767]: I0115 05:47:51.445611 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/50762457-1de3-4ad8-a324-b2976d17becf-tigera-ca-bundle\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445678 kubelet[2767]: I0115 05:47:51.445632 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-var-lib-calico\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445846 kubelet[2767]: I0115 05:47:51.445658 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-flexvol-driver-host\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445846 kubelet[2767]: I0115 05:47:51.445680 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/50762457-1de3-4ad8-a324-b2976d17becf-node-certs\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445846 kubelet[2767]: I0115 05:47:51.445701 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-policysync\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445846 kubelet[2767]: I0115 05:47:51.445749 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-xtables-lock\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.445846 kubelet[2767]: I0115 05:47:51.445772 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/50762457-1de3-4ad8-a324-b2976d17becf-var-run-calico\") pod \"calico-node-hpwzd\" (UID: \"50762457-1de3-4ad8-a324-b2976d17becf\") " pod="calico-system/calico-node-hpwzd" Jan 15 05:47:51.448774 kubelet[2767]: E0115 05:47:51.448586 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:51.449283 containerd[1600]: time="2026-01-15T05:47:51.449181877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5447787d76-8xpl2,Uid:341fa87a-5990-446a-ab5a-e838accfea57,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:51.489573 containerd[1600]: time="2026-01-15T05:47:51.489434223Z" level=info msg="connecting to shim 97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033" address="unix:///run/containerd/s/f17c415d52af9620d0dcbcddde58366be81998f819c6d5938abecfc09dad1dd4" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:51.542967 systemd[1]: Started cri-containerd-97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033.scope - libcontainer container 97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033. 
Jan 15 05:47:51.566575 kubelet[2767]: E0115 05:47:51.566507 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.566575 kubelet[2767]: W0115 05:47:51.566531 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.566575 kubelet[2767]: E0115 05:47:51.566554 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.570408 kubelet[2767]: E0115 05:47:51.569957 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.570408 kubelet[2767]: W0115 05:47:51.569973 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.570408 kubelet[2767]: E0115 05:47:51.569987 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.571000 audit[3298]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:51.571000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc2c010cc0 a2=0 a3=7ffc2c010cac items=0 ppid=2920 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:51.578000 audit[3298]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:51.578000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2c010cc0 a2=0 a3=0 items=0 ppid=2920 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:51.584000 audit: BPF prog-id=156 op=LOAD Jan 15 05:47:51.586000 audit: BPF prog-id=157 op=LOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=157 op=UNLOAD Jan 15 05:47:51.586000 
audit[3278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=158 op=LOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=159 op=LOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=159 op=UNLOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=158 op=UNLOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.586000 audit: BPF prog-id=160 op=LOAD Jan 15 05:47:51.586000 audit[3278]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3268 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937653433623936626564366134373536356237353432353536316339 Jan 15 05:47:51.604182 kubelet[2767]: E0115 05:47:51.603728 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:47:51.627081 kubelet[2767]: E0115 05:47:51.626880 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.627081 kubelet[2767]: W0115 05:47:51.626927 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.627081 kubelet[2767]: E0115 05:47:51.626946 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.627319 kubelet[2767]: E0115 05:47:51.627305 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.627414 kubelet[2767]: W0115 05:47:51.627401 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.627519 kubelet[2767]: E0115 05:47:51.627501 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.628071 kubelet[2767]: E0115 05:47:51.627862 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.628071 kubelet[2767]: W0115 05:47:51.627874 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.628071 kubelet[2767]: E0115 05:47:51.627883 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.628285 kubelet[2767]: E0115 05:47:51.628272 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.628333 kubelet[2767]: W0115 05:47:51.628323 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.628451 kubelet[2767]: E0115 05:47:51.628428 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.628936 kubelet[2767]: E0115 05:47:51.628920 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.629104 kubelet[2767]: W0115 05:47:51.628994 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.629104 kubelet[2767]: E0115 05:47:51.629007 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.629606 kubelet[2767]: E0115 05:47:51.629591 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.629688 kubelet[2767]: W0115 05:47:51.629665 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.629743 kubelet[2767]: E0115 05:47:51.629732 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.630174 kubelet[2767]: E0115 05:47:51.630137 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.630393 kubelet[2767]: W0115 05:47:51.630220 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.630393 kubelet[2767]: E0115 05:47:51.630313 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.630873 kubelet[2767]: E0115 05:47:51.630840 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.631133 kubelet[2767]: W0115 05:47:51.630991 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.631133 kubelet[2767]: E0115 05:47:51.631004 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.631995 kubelet[2767]: E0115 05:47:51.631977 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.632261 kubelet[2767]: W0115 05:47:51.632122 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.632261 kubelet[2767]: E0115 05:47:51.632139 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.632535 kubelet[2767]: E0115 05:47:51.632452 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.632535 kubelet[2767]: W0115 05:47:51.632531 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.632589 kubelet[2767]: E0115 05:47:51.632543 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.634129 kubelet[2767]: E0115 05:47:51.634027 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.634129 kubelet[2767]: W0115 05:47:51.634044 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.634129 kubelet[2767]: E0115 05:47:51.634056 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.634662 kubelet[2767]: E0115 05:47:51.634623 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.634662 kubelet[2767]: W0115 05:47:51.634638 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.634662 kubelet[2767]: E0115 05:47:51.634651 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.638283 kubelet[2767]: E0115 05:47:51.637771 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.638283 kubelet[2767]: W0115 05:47:51.637785 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.638283 kubelet[2767]: E0115 05:47:51.637799 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.639185 kubelet[2767]: E0115 05:47:51.639044 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.639389 kubelet[2767]: W0115 05:47:51.639301 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.639750 kubelet[2767]: E0115 05:47:51.639698 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.641042 kubelet[2767]: E0115 05:47:51.640974 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.641285 kubelet[2767]: W0115 05:47:51.641233 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.641500 kubelet[2767]: E0115 05:47:51.641436 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.643652 kubelet[2767]: E0115 05:47:51.643620 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.643652 kubelet[2767]: W0115 05:47:51.643652 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.643751 kubelet[2767]: E0115 05:47:51.643666 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.644317 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.646408 kubelet[2767]: W0115 05:47:51.644332 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.644421 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.644815 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.646408 kubelet[2767]: W0115 05:47:51.644828 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.644839 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.645176 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.646408 kubelet[2767]: W0115 05:47:51.645188 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.645200 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.646408 kubelet[2767]: E0115 05:47:51.646002 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.646627 kubelet[2767]: W0115 05:47:51.646015 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.646627 kubelet[2767]: E0115 05:47:51.646104 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.647751 kubelet[2767]: E0115 05:47:51.647680 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.647805 kubelet[2767]: W0115 05:47:51.647763 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.647805 kubelet[2767]: E0115 05:47:51.647774 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.647853 kubelet[2767]: I0115 05:47:51.647805 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdab6cdb-eee3-4132-9980-23cedc6f5612-kubelet-dir\") pod \"csi-node-driver-dt9mp\" (UID: \"cdab6cdb-eee3-4132-9980-23cedc6f5612\") " pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:51.648521 kubelet[2767]: E0115 05:47:51.648498 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.648592 kubelet[2767]: W0115 05:47:51.648579 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.648664 kubelet[2767]: E0115 05:47:51.648650 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.649617 kubelet[2767]: E0115 05:47:51.649390 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.649716 kubelet[2767]: W0115 05:47:51.649699 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.649776 kubelet[2767]: E0115 05:47:51.649764 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.650217 kubelet[2767]: E0115 05:47:51.650148 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.650322 kubelet[2767]: W0115 05:47:51.650301 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.650499 kubelet[2767]: E0115 05:47:51.650484 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.650660 kubelet[2767]: I0115 05:47:51.650591 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdab6cdb-eee3-4132-9980-23cedc6f5612-socket-dir\") pod \"csi-node-driver-dt9mp\" (UID: \"cdab6cdb-eee3-4132-9980-23cedc6f5612\") " pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:51.651259 kubelet[2767]: E0115 05:47:51.651236 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.651325 kubelet[2767]: W0115 05:47:51.651312 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.651440 kubelet[2767]: E0115 05:47:51.651428 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.651979 kubelet[2767]: E0115 05:47:51.651966 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.652034 kubelet[2767]: W0115 05:47:51.652023 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.652076 kubelet[2767]: E0115 05:47:51.652067 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.652713 kubelet[2767]: E0115 05:47:51.652700 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.652782 kubelet[2767]: W0115 05:47:51.652770 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.652826 kubelet[2767]: E0115 05:47:51.652816 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.653073 kubelet[2767]: I0115 05:47:51.653016 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdab6cdb-eee3-4132-9980-23cedc6f5612-registration-dir\") pod \"csi-node-driver-dt9mp\" (UID: \"cdab6cdb-eee3-4132-9980-23cedc6f5612\") " pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:51.654538 kubelet[2767]: E0115 05:47:51.654522 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.654610 kubelet[2767]: W0115 05:47:51.654598 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.654657 kubelet[2767]: E0115 05:47:51.654647 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.655184 kubelet[2767]: E0115 05:47:51.655169 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.655245 kubelet[2767]: W0115 05:47:51.655234 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.655304 kubelet[2767]: E0115 05:47:51.655293 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.657481 kubelet[2767]: E0115 05:47:51.657461 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.657563 kubelet[2767]: W0115 05:47:51.657550 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.657612 kubelet[2767]: E0115 05:47:51.657601 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.657931 containerd[1600]: time="2026-01-15T05:47:51.657834363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5447787d76-8xpl2,Uid:341fa87a-5990-446a-ab5a-e838accfea57,Namespace:calico-system,Attempt:0,} returns sandbox id \"97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033\"" Jan 15 05:47:51.658075 kubelet[2767]: I0115 05:47:51.657717 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtr9n\" (UniqueName: \"kubernetes.io/projected/cdab6cdb-eee3-4132-9980-23cedc6f5612-kube-api-access-wtr9n\") pod \"csi-node-driver-dt9mp\" (UID: \"cdab6cdb-eee3-4132-9980-23cedc6f5612\") " pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:51.659802 kubelet[2767]: E0115 05:47:51.659784 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:51.661494 kubelet[2767]: E0115 05:47:51.659885 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.661581 kubelet[2767]: W0115 05:47:51.661567 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.661632 kubelet[2767]: E0115 05:47:51.661621 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.662184 kubelet[2767]: E0115 05:47:51.662170 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.663104 kubelet[2767]: W0115 05:47:51.663014 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.663282 kubelet[2767]: E0115 05:47:51.663261 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.663790 containerd[1600]: time="2026-01-15T05:47:51.663737405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 15 05:47:51.665174 kubelet[2767]: E0115 05:47:51.665124 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.665174 kubelet[2767]: W0115 05:47:51.665166 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.665428 kubelet[2767]: E0115 05:47:51.665285 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.665481 kubelet[2767]: I0115 05:47:51.665439 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdab6cdb-eee3-4132-9980-23cedc6f5612-varrun\") pod \"csi-node-driver-dt9mp\" (UID: \"cdab6cdb-eee3-4132-9980-23cedc6f5612\") " pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:51.666755 kubelet[2767]: E0115 05:47:51.666716 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.666755 kubelet[2767]: W0115 05:47:51.666752 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.666830 kubelet[2767]: E0115 05:47:51.666767 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.667797 kubelet[2767]: E0115 05:47:51.667766 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.668207 kubelet[2767]: W0115 05:47:51.668150 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.668287 kubelet[2767]: E0115 05:47:51.668252 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.716597 kubelet[2767]: E0115 05:47:51.716524 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:51.717988 containerd[1600]: time="2026-01-15T05:47:51.717529578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hpwzd,Uid:50762457-1de3-4ad8-a324-b2976d17becf,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:51.758488 containerd[1600]: time="2026-01-15T05:47:51.758075215Z" level=info msg="connecting to shim 25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95" address="unix:///run/containerd/s/e9633b176eb0a5917646f05107573d97bf0a12282bd49ea8526088552f7023f8" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:47:51.768056 kubelet[2767]: E0115 05:47:51.768013 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.768056 kubelet[2767]: W0115 05:47:51.768034 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.768056 kubelet[2767]: E0115 05:47:51.768052 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.768673 kubelet[2767]: E0115 05:47:51.768613 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.768673 kubelet[2767]: W0115 05:47:51.768644 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.768673 kubelet[2767]: E0115 05:47:51.768655 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.770638 kubelet[2767]: E0115 05:47:51.770566 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.770737 kubelet[2767]: W0115 05:47:51.770687 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.770737 kubelet[2767]: E0115 05:47:51.770720 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.771403 kubelet[2767]: E0115 05:47:51.771279 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.771403 kubelet[2767]: W0115 05:47:51.771387 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.771403 kubelet[2767]: E0115 05:47:51.771399 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.772835 kubelet[2767]: E0115 05:47:51.772742 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.772835 kubelet[2767]: W0115 05:47:51.772787 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.772835 kubelet[2767]: E0115 05:47:51.772811 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.773515 kubelet[2767]: E0115 05:47:51.773419 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.773515 kubelet[2767]: W0115 05:47:51.773459 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.773515 kubelet[2767]: E0115 05:47:51.773472 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.774206 kubelet[2767]: E0115 05:47:51.774129 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.774206 kubelet[2767]: W0115 05:47:51.774159 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.774282 kubelet[2767]: E0115 05:47:51.774220 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.774981 kubelet[2767]: E0115 05:47:51.774850 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.774981 kubelet[2767]: W0115 05:47:51.774881 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.774981 kubelet[2767]: E0115 05:47:51.774961 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.775972 kubelet[2767]: E0115 05:47:51.775852 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.775972 kubelet[2767]: W0115 05:47:51.775880 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.776078 kubelet[2767]: E0115 05:47:51.775990 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.777229 kubelet[2767]: E0115 05:47:51.777157 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.777597 kubelet[2767]: W0115 05:47:51.777568 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.777597 kubelet[2767]: E0115 05:47:51.777588 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.778933 kubelet[2767]: E0115 05:47:51.778726 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.778933 kubelet[2767]: W0115 05:47:51.778764 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.778933 kubelet[2767]: E0115 05:47:51.778776 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.779975 kubelet[2767]: E0115 05:47:51.779847 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.779975 kubelet[2767]: W0115 05:47:51.779882 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.779975 kubelet[2767]: E0115 05:47:51.779920 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.780618 kubelet[2767]: E0115 05:47:51.780578 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.780618 kubelet[2767]: W0115 05:47:51.780606 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.780618 kubelet[2767]: E0115 05:47:51.780617 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.781148 kubelet[2767]: E0115 05:47:51.781137 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.781148 kubelet[2767]: W0115 05:47:51.781149 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.781493 kubelet[2767]: E0115 05:47:51.781160 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.782236 kubelet[2767]: E0115 05:47:51.782177 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.782236 kubelet[2767]: W0115 05:47:51.782216 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.782236 kubelet[2767]: E0115 05:47:51.782228 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.783336 kubelet[2767]: E0115 05:47:51.782668 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.783336 kubelet[2767]: W0115 05:47:51.782684 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.783336 kubelet[2767]: E0115 05:47:51.782702 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.783336 kubelet[2767]: E0115 05:47:51.783164 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.783336 kubelet[2767]: W0115 05:47:51.783174 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.783336 kubelet[2767]: E0115 05:47:51.783183 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.784225 kubelet[2767]: E0115 05:47:51.784173 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.784280 kubelet[2767]: W0115 05:47:51.784261 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.784280 kubelet[2767]: E0115 05:47:51.784274 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.784995 kubelet[2767]: E0115 05:47:51.784945 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.784995 kubelet[2767]: W0115 05:47:51.784971 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.784995 kubelet[2767]: E0115 05:47:51.784983 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.786785 kubelet[2767]: E0115 05:47:51.786767 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.786958 kubelet[2767]: W0115 05:47:51.786943 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.787035 kubelet[2767]: E0115 05:47:51.787023 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.787534 kubelet[2767]: E0115 05:47:51.787521 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.787607 kubelet[2767]: W0115 05:47:51.787596 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.787660 kubelet[2767]: E0115 05:47:51.787649 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:51.788464 kubelet[2767]: E0115 05:47:51.788445 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.788549 kubelet[2767]: W0115 05:47:51.788532 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.788642 kubelet[2767]: E0115 05:47:51.788625 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.789204 kubelet[2767]: E0115 05:47:51.789187 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.789292 kubelet[2767]: W0115 05:47:51.789272 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.789446 kubelet[2767]: E0115 05:47:51.789426 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.790503 kubelet[2767]: E0115 05:47:51.790488 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.790577 kubelet[2767]: W0115 05:47:51.790565 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.790640 kubelet[2767]: E0115 05:47:51.790628 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.791025 kubelet[2767]: E0115 05:47:51.791012 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.791080 kubelet[2767]: W0115 05:47:51.791069 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.791123 kubelet[2767]: E0115 05:47:51.791113 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.798776 systemd[1]: Started cri-containerd-25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95.scope - libcontainer container 25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95. 
Jan 15 05:47:51.805687 kubelet[2767]: E0115 05:47:51.805628 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:51.805687 kubelet[2767]: W0115 05:47:51.805644 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:51.805687 kubelet[2767]: E0115 05:47:51.805658 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:51.817000 audit: BPF prog-id=161 op=LOAD Jan 15 05:47:51.817000 audit: BPF prog-id=162 op=LOAD Jan 15 05:47:51.817000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.817000 audit: BPF prog-id=162 op=UNLOAD Jan 15 05:47:51.817000 audit[3377]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.818000 audit: BPF prog-id=163 op=LOAD Jan 15 05:47:51.818000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.819000 audit: BPF prog-id=164 op=LOAD Jan 15 05:47:51.819000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.819000 audit: BPF prog-id=164 op=UNLOAD Jan 15 05:47:51.819000 audit[3377]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.819000 audit: BPF prog-id=163 op=UNLOAD Jan 15 05:47:51.819000 audit[3377]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.819000 audit: BPF prog-id=165 op=LOAD Jan 15 05:47:51.819000 audit[3377]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3366 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:51.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613335313631623230333066666266623832336637633166636261 Jan 15 05:47:51.840884 containerd[1600]: time="2026-01-15T05:47:51.840806530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hpwzd,Uid:50762457-1de3-4ad8-a324-b2976d17becf,Namespace:calico-system,Attempt:0,} returns sandbox id \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\"" Jan 15 05:47:51.841839 kubelet[2767]: E0115 05:47:51.841768 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:52.886038 containerd[1600]: time="2026-01-15T05:47:52.885929606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:52.887141 containerd[1600]: time="2026-01-15T05:47:52.887029362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 15 05:47:52.888442 containerd[1600]: time="2026-01-15T05:47:52.888390259Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:52.890739 containerd[1600]: time="2026-01-15T05:47:52.890677225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:52.891215 containerd[1600]: time="2026-01-15T05:47:52.891163802Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.22732114s" Jan 15 05:47:52.891215 containerd[1600]: time="2026-01-15T05:47:52.891205430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 15 05:47:52.891988 containerd[1600]: time="2026-01-15T05:47:52.891943796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 15 05:47:52.908078 containerd[1600]: time="2026-01-15T05:47:52.907934800Z" level=info msg="CreateContainer within sandbox \"97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 15 05:47:52.918403 containerd[1600]: time="2026-01-15T05:47:52.916674239Z" level=info msg="Container b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:52.925477 containerd[1600]: time="2026-01-15T05:47:52.925423072Z" level=info msg="CreateContainer within sandbox \"97e43b96bed6a47565b75425561c918c2206bdbca01a8c09f7a334a999530033\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8\"" Jan 15 05:47:52.926080 containerd[1600]: time="2026-01-15T05:47:52.926024645Z" level=info msg="StartContainer for \"b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8\"" Jan 15 05:47:52.927502 containerd[1600]: time="2026-01-15T05:47:52.927426834Z" level=info msg="connecting to shim b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8" address="unix:///run/containerd/s/f17c415d52af9620d0dcbcddde58366be81998f819c6d5938abecfc09dad1dd4" protocol=ttrpc version=3 Jan 15 05:47:52.968599 systemd[1]: Started cri-containerd-b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8.scope - libcontainer container b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8. 
Jan 15 05:47:52.995000 audit: BPF prog-id=166 op=LOAD Jan 15 05:47:53.000040 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 15 05:47:53.000132 kernel: audit: type=1334 audit(1768456072.995:553): prog-id=166 op=LOAD Jan 15 05:47:53.000164 kernel: audit: type=1334 audit(1768456072.996:554): prog-id=167 op=LOAD Jan 15 05:47:52.996000 audit: BPF prog-id=167 op=LOAD Jan 15 05:47:53.002470 kernel: audit: type=1300 audit(1768456072.996:554): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:53.019247 kernel: audit: type=1327 audit(1768456072.996:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:53.019301 kernel: audit: type=1334 audit(1768456072.996:555): prog-id=167 op=UNLOAD Jan 15 05:47:52.996000 audit: BPF prog-id=167 op=UNLOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.030092 kernel: audit: type=1300 audit(1768456072.996:555): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.030216 kernel: audit: type=1327 audit(1768456072.996:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: BPF prog-id=168 op=LOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.051097 
kernel: audit: type=1334 audit(1768456072.996:556): prog-id=168 op=LOAD Jan 15 05:47:53.051153 kernel: audit: type=1300 audit(1768456072.996:556): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: BPF prog-id=169 op=LOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: BPF prog-id=169 op=UNLOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.060575 kernel: audit: type=1327 audit(1768456072.996:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: BPF prog-id=168 op=UNLOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:52.996000 audit: BPF prog-id=170 op=LOAD Jan 15 05:47:52.996000 audit[3442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3268 pid=3442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:52.996000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236376438303534646434653536663134306165386366623637383833 Jan 15 05:47:53.065979 containerd[1600]: time="2026-01-15T05:47:53.065879552Z" level=info msg="StartContainer for \"b67d8054dd4e56f140ae8cfb67883843f738dd17b5b5cc8d286b8a5339b9c5d8\" returns successfully" Jan 15 05:47:53.148234 kubelet[2767]: E0115 05:47:53.147706 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:47:53.268618 kubelet[2767]: E0115 05:47:53.268575 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:53.290767 kubelet[2767]: I0115 05:47:53.290676 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5447787d76-8xpl2" podStartSLOduration=1.062328303 podStartE2EDuration="2.290663518s" podCreationTimestamp="2026-01-15 05:47:51 +0000 UTC" firstStartedPulling="2026-01-15 05:47:51.663477906 +0000 UTC m=+19.725575300" lastFinishedPulling="2026-01-15 05:47:52.89181312 +0000 UTC m=+20.953910515" observedRunningTime="2026-01-15 05:47:53.289329765 +0000 UTC m=+21.351427160" watchObservedRunningTime="2026-01-15 05:47:53.290663518 +0000 UTC m=+21.352760912" Jan 15 05:47:53.331000 audit[3486]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:53.331000 audit[3486]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd7943f2f0 a2=0 a3=7ffd7943f2dc items=0 ppid=2920 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:53.339000 audit[3486]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3486 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:47:53.339000 audit[3486]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd7943f2f0 a2=0 a3=7ffd7943f2dc items=0 ppid=2920 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.339000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:47:53.358280 kubelet[2767]: E0115 05:47:53.358190 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.358280 kubelet[2767]: W0115 05:47:53.358225 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 
05:47:53.358280 kubelet[2767]: E0115 05:47:53.358242 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.359446 kubelet[2767]: E0115 05:47:53.359394 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.359446 kubelet[2767]: W0115 05:47:53.359425 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.359446 kubelet[2767]: E0115 05:47:53.359437 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.359839 kubelet[2767]: E0115 05:47:53.359765 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.359839 kubelet[2767]: W0115 05:47:53.359789 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.359839 kubelet[2767]: E0115 05:47:53.359798 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.360157 kubelet[2767]: E0115 05:47:53.360116 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.360157 kubelet[2767]: W0115 05:47:53.360141 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.360157 kubelet[2767]: E0115 05:47:53.360151 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.360567 kubelet[2767]: E0115 05:47:53.360481 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.360567 kubelet[2767]: W0115 05:47:53.360546 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.360567 kubelet[2767]: E0115 05:47:53.360561 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.360941 kubelet[2767]: E0115 05:47:53.360863 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.360983 kubelet[2767]: W0115 05:47:53.360958 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.360983 kubelet[2767]: E0115 05:47:53.360968 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.361375 kubelet[2767]: E0115 05:47:53.361301 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.361375 kubelet[2767]: W0115 05:47:53.361324 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.361467 kubelet[2767]: E0115 05:47:53.361337 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.361828 kubelet[2767]: E0115 05:47:53.361771 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.361828 kubelet[2767]: W0115 05:47:53.361794 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.361828 kubelet[2767]: E0115 05:47:53.361803 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.362225 kubelet[2767]: E0115 05:47:53.362126 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.362225 kubelet[2767]: W0115 05:47:53.362149 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.362225 kubelet[2767]: E0115 05:47:53.362158 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.362558 kubelet[2767]: E0115 05:47:53.362509 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.362558 kubelet[2767]: W0115 05:47:53.362529 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.362558 kubelet[2767]: E0115 05:47:53.362537 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.363536 kubelet[2767]: E0115 05:47:53.363447 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.364196 kubelet[2767]: W0115 05:47:53.364164 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.364314 kubelet[2767]: E0115 05:47:53.364202 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.365260 kubelet[2767]: E0115 05:47:53.365234 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.365260 kubelet[2767]: W0115 05:47:53.365257 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.365676 kubelet[2767]: E0115 05:47:53.365271 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.366750 kubelet[2767]: E0115 05:47:53.366698 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.366750 kubelet[2767]: W0115 05:47:53.366726 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.366750 kubelet[2767]: E0115 05:47:53.366742 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.367096 kubelet[2767]: E0115 05:47:53.367052 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.367096 kubelet[2767]: W0115 05:47:53.367062 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.367096 kubelet[2767]: E0115 05:47:53.367072 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.367566 kubelet[2767]: E0115 05:47:53.367462 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.367566 kubelet[2767]: W0115 05:47:53.367489 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.367566 kubelet[2767]: E0115 05:47:53.367499 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.388149 kubelet[2767]: E0115 05:47:53.388080 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.388149 kubelet[2767]: W0115 05:47:53.388127 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.388149 kubelet[2767]: E0115 05:47:53.388146 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.388682 kubelet[2767]: E0115 05:47:53.388653 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.388682 kubelet[2767]: W0115 05:47:53.388684 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.388768 kubelet[2767]: E0115 05:47:53.388697 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.389126 kubelet[2767]: E0115 05:47:53.389093 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.389126 kubelet[2767]: W0115 05:47:53.389116 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.389126 kubelet[2767]: E0115 05:47:53.389127 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.389698 kubelet[2767]: E0115 05:47:53.389672 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.389698 kubelet[2767]: W0115 05:47:53.389693 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.389786 kubelet[2767]: E0115 05:47:53.389705 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.390077 kubelet[2767]: E0115 05:47:53.390062 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.390166 kubelet[2767]: W0115 05:47:53.390137 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.390166 kubelet[2767]: E0115 05:47:53.390153 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.390739 kubelet[2767]: E0115 05:47:53.390700 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.390739 kubelet[2767]: W0115 05:47:53.390714 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.390739 kubelet[2767]: E0115 05:47:53.390723 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.391402 kubelet[2767]: E0115 05:47:53.391303 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.391402 kubelet[2767]: W0115 05:47:53.391331 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.391402 kubelet[2767]: E0115 05:47:53.391391 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.393333 kubelet[2767]: E0115 05:47:53.393314 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.393333 kubelet[2767]: W0115 05:47:53.393332 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.393509 kubelet[2767]: E0115 05:47:53.393422 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.393832 kubelet[2767]: E0115 05:47:53.393771 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.393832 kubelet[2767]: W0115 05:47:53.393797 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.393832 kubelet[2767]: E0115 05:47:53.393807 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.394159 kubelet[2767]: E0115 05:47:53.394121 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.394159 kubelet[2767]: W0115 05:47:53.394142 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.394159 kubelet[2767]: E0115 05:47:53.394152 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.394801 kubelet[2767]: E0115 05:47:53.394659 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.394801 kubelet[2767]: W0115 05:47:53.394671 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.394801 kubelet[2767]: E0115 05:47:53.394680 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.395432 kubelet[2767]: E0115 05:47:53.395313 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.395486 kubelet[2767]: W0115 05:47:53.395458 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.395486 kubelet[2767]: E0115 05:47:53.395471 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.396229 kubelet[2767]: E0115 05:47:53.396129 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.396229 kubelet[2767]: W0115 05:47:53.396213 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.396229 kubelet[2767]: E0115 05:47:53.396223 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.396847 kubelet[2767]: E0115 05:47:53.396815 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.396847 kubelet[2767]: W0115 05:47:53.396837 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.396847 kubelet[2767]: E0115 05:47:53.396846 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.397551 kubelet[2767]: E0115 05:47:53.397518 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.397551 kubelet[2767]: W0115 05:47:53.397542 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.397551 kubelet[2767]: E0115 05:47:53.397552 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.397865 kubelet[2767]: E0115 05:47:53.397827 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.397865 kubelet[2767]: W0115 05:47:53.397850 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.397865 kubelet[2767]: E0115 05:47:53.397860 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.398534 kubelet[2767]: E0115 05:47:53.398428 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.398534 kubelet[2767]: W0115 05:47:53.398472 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.398534 kubelet[2767]: E0115 05:47:53.398487 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 05:47:53.398889 kubelet[2767]: E0115 05:47:53.398857 2767 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 05:47:53.398889 kubelet[2767]: W0115 05:47:53.398890 2767 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 05:47:53.399234 kubelet[2767]: E0115 05:47:53.398964 2767 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 05:47:53.844863 containerd[1600]: time="2026-01-15T05:47:53.844802104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:53.846015 containerd[1600]: time="2026-01-15T05:47:53.845961642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 15 05:47:53.847528 containerd[1600]: time="2026-01-15T05:47:53.847460504Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:53.850421 containerd[1600]: time="2026-01-15T05:47:53.850271152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:53.851107 containerd[1600]: time="2026-01-15T05:47:53.851046776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 959.075869ms" Jan 15 05:47:53.851107 containerd[1600]: time="2026-01-15T05:47:53.851086910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 15 05:47:53.855853 containerd[1600]: time="2026-01-15T05:47:53.855781550Z" level=info msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 05:47:53.866402 containerd[1600]: time="2026-01-15T05:47:53.866320231Z" level=info msg="Container a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:53.875791 containerd[1600]: time="2026-01-15T05:47:53.875681075Z" level=info msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0\"" Jan 15 05:47:53.876231 containerd[1600]: time="2026-01-15T05:47:53.876100886Z" level=info msg="StartContainer for \"a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0\"" Jan 15 05:47:53.877741 containerd[1600]: time="2026-01-15T05:47:53.877650991Z" level=info msg="connecting to shim a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0" address="unix:///run/containerd/s/e9633b176eb0a5917646f05107573d97bf0a12282bd49ea8526088552f7023f8" protocol=ttrpc version=3 Jan 15 05:47:53.917540 systemd[1]: Started cri-containerd-a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0.scope - libcontainer container a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0. 
Jan 15 05:47:53.992000 audit: BPF prog-id=171 op=LOAD Jan 15 05:47:53.992000 audit[3524]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3366 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303065656439633234653530303732393833616138316662356465 Jan 15 05:47:53.992000 audit: BPF prog-id=172 op=LOAD Jan 15 05:47:53.992000 audit[3524]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3366 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303065656439633234653530303732393833616138316662356465 Jan 15 05:47:53.992000 audit: BPF prog-id=172 op=UNLOAD Jan 15 05:47:53.992000 audit[3524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303065656439633234653530303732393833616138316662356465 Jan 15 05:47:53.992000 audit: BPF prog-id=171 op=UNLOAD Jan 15 05:47:53.992000 audit[3524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303065656439633234653530303732393833616138316662356465 Jan 15 05:47:53.992000 audit: BPF prog-id=173 op=LOAD Jan 15 05:47:53.992000 audit[3524]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3366 pid=3524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:53.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132303065656439633234653530303732393833616138316662356465 Jan 15 05:47:54.031202 containerd[1600]: time="2026-01-15T05:47:54.031117852Z" level=info msg="StartContainer for 
\"a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0\" returns successfully" Jan 15 05:47:54.073861 systemd[1]: cri-containerd-a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0.scope: Deactivated successfully. Jan 15 05:47:54.078250 containerd[1600]: time="2026-01-15T05:47:54.078191379Z" level=info msg="received container exit event container_id:\"a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0\" id:\"a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0\" pid:3537 exited_at:{seconds:1768456074 nanos:77812653}" Jan 15 05:47:54.079000 audit: BPF prog-id=173 op=UNLOAD Jan 15 05:47:54.111687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a200eed9c24e50072983aa81fb5de793e8c30d45163cede6c0797b13ad8e78b0-rootfs.mount: Deactivated successfully. Jan 15 05:47:54.268867 kubelet[2767]: E0115 05:47:54.268679 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:54.268867 kubelet[2767]: E0115 05:47:54.268812 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:54.271107 containerd[1600]: time="2026-01-15T05:47:54.270995298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 05:47:55.145647 kubelet[2767]: E0115 05:47:55.145540 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:47:55.805201 containerd[1600]: time="2026-01-15T05:47:55.805124345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:55.806180 containerd[1600]: time="2026-01-15T05:47:55.806141579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 15 05:47:55.807404 containerd[1600]: time="2026-01-15T05:47:55.807315144Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:55.810262 containerd[1600]: time="2026-01-15T05:47:55.810193582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:47:55.810998 containerd[1600]: time="2026-01-15T05:47:55.810900539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.539842885s" Jan 15 05:47:55.810998 containerd[1600]: time="2026-01-15T05:47:55.810971711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 15 05:47:55.816484 containerd[1600]: time="2026-01-15T05:47:55.816421795Z" level=info 
msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 05:47:55.826569 containerd[1600]: time="2026-01-15T05:47:55.826523306Z" level=info msg="Container 26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:47:55.845441 containerd[1600]: time="2026-01-15T05:47:55.843420795Z" level=info msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa\"" Jan 15 05:47:55.845609 containerd[1600]: time="2026-01-15T05:47:55.845555920Z" level=info msg="StartContainer for \"26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa\"" Jan 15 05:47:55.846957 containerd[1600]: time="2026-01-15T05:47:55.846895003Z" level=info msg="connecting to shim 26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa" address="unix:///run/containerd/s/e9633b176eb0a5917646f05107573d97bf0a12282bd49ea8526088552f7023f8" protocol=ttrpc version=3 Jan 15 05:47:55.886536 systemd[1]: Started cri-containerd-26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa.scope - libcontainer container 26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa. Jan 15 05:47:55.962000 audit: BPF prog-id=174 op=LOAD Jan 15 05:47:55.962000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3366 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:55.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623865303037613366396364333764643032333365653864353337 Jan 15 05:47:55.962000 audit: BPF prog-id=175 op=LOAD Jan 15 05:47:55.962000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3366 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:55.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623865303037613366396364333764643032333365653864353337 Jan 15 05:47:55.962000 audit: BPF prog-id=175 op=UNLOAD Jan 15 05:47:55.962000 audit[3583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:55.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623865303037613366396364333764643032333365653864353337 Jan 15 05:47:55.962000 audit: BPF prog-id=174 op=UNLOAD Jan 15 05:47:55.962000 
audit[3583]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:55.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623865303037613366396364333764643032333365653864353337 Jan 15 05:47:55.962000 audit: BPF prog-id=176 op=LOAD Jan 15 05:47:55.962000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3366 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:47:55.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236623865303037613366396364333764643032333365653864353337 Jan 15 05:47:55.987708 containerd[1600]: time="2026-01-15T05:47:55.987624452Z" level=info msg="StartContainer for \"26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa\" returns successfully" Jan 15 05:47:56.277596 kubelet[2767]: E0115 05:47:56.277024 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:57.147426 kubelet[2767]: E0115 05:47:57.146596 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:47:57.156282 systemd[1]: cri-containerd-26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa.scope: Deactivated successfully. Jan 15 05:47:57.157218 systemd[1]: cri-containerd-26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa.scope: Consumed 1.058s CPU time, 173.7M memory peak, 3.8M read from disk, 171.3M written to disk. Jan 15 05:47:57.160000 audit: BPF prog-id=176 op=UNLOAD Jan 15 05:47:57.161680 containerd[1600]: time="2026-01-15T05:47:57.161638828Z" level=info msg="received container exit event container_id:\"26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa\" id:\"26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa\" pid:3596 exited_at:{seconds:1768456077 nanos:160123948}" Jan 15 05:47:57.237156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26b8e007a3f9cd37dd0233ee8d5378afc3a7bd8f376257f51879227d28cc38fa-rootfs.mount: Deactivated successfully. 
Jan 15 05:47:57.241971 kubelet[2767]: I0115 05:47:57.240195 2767 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 05:47:57.280284 kubelet[2767]: E0115 05:47:57.280245 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:57.370164 systemd[1]: Created slice kubepods-besteffort-podd5840778_1f77_4d10_b841_e5e49eaeab2c.slice - libcontainer container kubepods-besteffort-podd5840778_1f77_4d10_b841_e5e49eaeab2c.slice. Jan 15 05:47:57.391986 systemd[1]: Created slice kubepods-burstable-pod20e98906_80c1_467c_b4a1_25e2d8442e54.slice - libcontainer container kubepods-burstable-pod20e98906_80c1_467c_b4a1_25e2d8442e54.slice. Jan 15 05:47:57.417804 systemd[1]: Created slice kubepods-burstable-pod0e9df5a5_f471_4333_a2bf_d06852202d61.slice - libcontainer container kubepods-burstable-pod0e9df5a5_f471_4333_a2bf_d06852202d61.slice. Jan 15 05:47:57.434506 systemd[1]: Created slice kubepods-besteffort-pod10bb0e11_4f57_4c87_8485_4dadf3148ce0.slice - libcontainer container kubepods-besteffort-pod10bb0e11_4f57_4c87_8485_4dadf3148ce0.slice. Jan 15 05:47:57.434909 kubelet[2767]: I0115 05:47:57.432905 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-ca-bundle\") pod \"whisker-f4f84fcd8-c5mjh\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " pod="calico-system/whisker-f4f84fcd8-c5mjh" Jan 15 05:47:57.434909 kubelet[2767]: I0115 05:47:57.434640 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e9df5a5-f471-4333-a2bf-d06852202d61-config-volume\") pod \"coredns-674b8bbfcf-pxxt5\" (UID: \"0e9df5a5-f471-4333-a2bf-d06852202d61\") " pod="kube-system/coredns-674b8bbfcf-pxxt5" Jan 15 05:47:57.434909 kubelet[2767]: I0115 05:47:57.434674 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjr8q\" (UniqueName: \"kubernetes.io/projected/0e9df5a5-f471-4333-a2bf-d06852202d61-kube-api-access-vjr8q\") pod \"coredns-674b8bbfcf-pxxt5\" (UID: \"0e9df5a5-f471-4333-a2bf-d06852202d61\") " pod="kube-system/coredns-674b8bbfcf-pxxt5" Jan 15 05:47:57.434909 kubelet[2767]: I0115 05:47:57.434702 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6bnr\" (UniqueName: \"kubernetes.io/projected/6709b21e-b302-4c1a-b8f9-4c50d880ff8b-kube-api-access-m6bnr\") pod \"calico-apiserver-6988569c94-rjpxd\" (UID: \"6709b21e-b302-4c1a-b8f9-4c50d880ff8b\") " pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" Jan 15 05:47:57.434909 kubelet[2767]: I0115 05:47:57.434730 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10bb0e11-4f57-4c87-8485-4dadf3148ce0-tigera-ca-bundle\") pod \"calico-kube-controllers-bf57cd658-gnjlw\" (UID: \"10bb0e11-4f57-4c87-8485-4dadf3148ce0\") " pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" Jan 15 05:47:57.435197 kubelet[2767]: I0115 05:47:57.434756 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1a790238-cc0e-45da-b99f-c6adf406e452-goldmane-ca-bundle\") pod \"goldmane-666569f655-s4fx8\" (UID: \"1a790238-cc0e-45da-b99f-c6adf406e452\") " pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:57.435197 kubelet[2767]: I0115 05:47:57.434777 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a790238-cc0e-45da-b99f-c6adf406e452-goldmane-key-pair\") pod \"goldmane-666569f655-s4fx8\" (UID: \"1a790238-cc0e-45da-b99f-c6adf406e452\") " pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:57.435197 kubelet[2767]: I0115 05:47:57.434802 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgqhz\" (UniqueName: \"kubernetes.io/projected/20e98906-80c1-467c-b4a1-25e2d8442e54-kube-api-access-cgqhz\") pod \"coredns-674b8bbfcf-s9s7c\" (UID: \"20e98906-80c1-467c-b4a1-25e2d8442e54\") " pod="kube-system/coredns-674b8bbfcf-s9s7c" Jan 15 05:47:57.435197 kubelet[2767]: I0115 05:47:57.434824 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a790238-cc0e-45da-b99f-c6adf406e452-config\") pod \"goldmane-666569f655-s4fx8\" (UID: \"1a790238-cc0e-45da-b99f-c6adf406e452\") " pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:57.435197 kubelet[2767]: I0115 05:47:57.434858 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhb9\" (UniqueName: \"kubernetes.io/projected/d5840778-1f77-4d10-b841-e5e49eaeab2c-kube-api-access-pwhb9\") pod \"whisker-f4f84fcd8-c5mjh\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " pod="calico-system/whisker-f4f84fcd8-c5mjh" Jan 15 05:47:57.435688 kubelet[2767]: I0115 05:47:57.434879 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stvc8\" (UniqueName: \"kubernetes.io/projected/6b6565ef-726d-4164-a834-22cf5e5bfe9a-kube-api-access-stvc8\") pod \"calico-apiserver-6988569c94-7xzb7\" (UID: \"6b6565ef-726d-4164-a834-22cf5e5bfe9a\") " pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" Jan 15 05:47:57.435688 kubelet[2767]: I0115 05:47:57.434907 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpmr\" (UniqueName: \"kubernetes.io/projected/1a790238-cc0e-45da-b99f-c6adf406e452-kube-api-access-2gpmr\") pod \"goldmane-666569f655-s4fx8\" (UID: \"1a790238-cc0e-45da-b99f-c6adf406e452\") " pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:57.435688 kubelet[2767]: I0115 05:47:57.435023 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6b6565ef-726d-4164-a834-22cf5e5bfe9a-calico-apiserver-certs\") pod \"calico-apiserver-6988569c94-7xzb7\" (UID: \"6b6565ef-726d-4164-a834-22cf5e5bfe9a\") " pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" Jan 15 05:47:57.435688 kubelet[2767]: I0115 05:47:57.435052 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blp4n\" (UniqueName: \"kubernetes.io/projected/10bb0e11-4f57-4c87-8485-4dadf3148ce0-kube-api-access-blp4n\") pod \"calico-kube-controllers-bf57cd658-gnjlw\" (UID: \"10bb0e11-4f57-4c87-8485-4dadf3148ce0\") " 
pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" Jan 15 05:47:57.435688 kubelet[2767]: I0115 05:47:57.435078 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-backend-key-pair\") pod \"whisker-f4f84fcd8-c5mjh\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " pod="calico-system/whisker-f4f84fcd8-c5mjh" Jan 15 05:47:57.435896 kubelet[2767]: I0115 05:47:57.435100 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6709b21e-b302-4c1a-b8f9-4c50d880ff8b-calico-apiserver-certs\") pod \"calico-apiserver-6988569c94-rjpxd\" (UID: \"6709b21e-b302-4c1a-b8f9-4c50d880ff8b\") " pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" Jan 15 05:47:57.435896 kubelet[2767]: I0115 05:47:57.435128 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20e98906-80c1-467c-b4a1-25e2d8442e54-config-volume\") pod \"coredns-674b8bbfcf-s9s7c\" (UID: \"20e98906-80c1-467c-b4a1-25e2d8442e54\") " pod="kube-system/coredns-674b8bbfcf-s9s7c" Jan 15 05:47:57.446462 systemd[1]: Created slice kubepods-besteffort-pod1a790238_cc0e_45da_b99f_c6adf406e452.slice - libcontainer container kubepods-besteffort-pod1a790238_cc0e_45da_b99f_c6adf406e452.slice. Jan 15 05:47:57.454820 systemd[1]: Created slice kubepods-besteffort-pod6b6565ef_726d_4164_a834_22cf5e5bfe9a.slice - libcontainer container kubepods-besteffort-pod6b6565ef_726d_4164_a834_22cf5e5bfe9a.slice. Jan 15 05:47:57.463666 systemd[1]: Created slice kubepods-besteffort-pod6709b21e_b302_4c1a_b8f9_4c50d880ff8b.slice - libcontainer container kubepods-besteffort-pod6709b21e_b302_4c1a_b8f9_4c50d880ff8b.slice. 
Jan 15 05:47:57.683420 containerd[1600]: time="2026-01-15T05:47:57.683195766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4f84fcd8-c5mjh,Uid:d5840778-1f77-4d10-b841-e5e49eaeab2c,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:57.706715 kubelet[2767]: E0115 05:47:57.706045 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:57.709059 containerd[1600]: time="2026-01-15T05:47:57.708972294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9s7c,Uid:20e98906-80c1-467c-b4a1-25e2d8442e54,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:57.727619 kubelet[2767]: E0115 05:47:57.727547 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:57.733313 containerd[1600]: time="2026-01-15T05:47:57.733212552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxxt5,Uid:0e9df5a5-f471-4333-a2bf-d06852202d61,Namespace:kube-system,Attempt:0,}" Jan 15 05:47:57.740484 containerd[1600]: time="2026-01-15T05:47:57.740263681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf57cd658-gnjlw,Uid:10bb0e11-4f57-4c87-8485-4dadf3148ce0,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:57.765711 containerd[1600]: time="2026-01-15T05:47:57.765618128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-7xzb7,Uid:6b6565ef-726d-4164-a834-22cf5e5bfe9a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:47:57.766296 containerd[1600]: time="2026-01-15T05:47:57.766231609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s4fx8,Uid:1a790238-cc0e-45da-b99f-c6adf406e452,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:57.774520 containerd[1600]: time="2026-01-15T05:47:57.774404963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-rjpxd,Uid:6709b21e-b302-4c1a-b8f9-4c50d880ff8b,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:47:57.984337 containerd[1600]: time="2026-01-15T05:47:57.983966700Z" level=error msg="Failed to destroy network for sandbox \"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:57.989404 containerd[1600]: time="2026-01-15T05:47:57.989105660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-rjpxd,Uid:6709b21e-b302-4c1a-b8f9-4c50d880ff8b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:57.989915 kubelet[2767]: E0115 05:47:57.989328 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:57.989915 kubelet[2767]: E0115 05:47:57.989445 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" Jan 15 05:47:57.989915 kubelet[2767]: E0115 05:47:57.989467 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" Jan 15 05:47:57.990429 kubelet[2767]: E0115 05:47:57.989543 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c9130d41bf9dd1efe0ba64494b2caa6c5fb7527acabae8e9598963e012929c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:47:57.995857 containerd[1600]: time="2026-01-15T05:47:57.995817549Z" level=error msg="Failed to destroy network for sandbox \"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:57.996423 containerd[1600]: time="2026-01-15T05:47:57.996208557Z" level=error msg="Failed to destroy network for sandbox \"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.006850 containerd[1600]: time="2026-01-15T05:47:58.006738158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4f84fcd8-c5mjh,Uid:d5840778-1f77-4d10-b841-e5e49eaeab2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.007181 kubelet[2767]: E0115 05:47:58.007115 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.007234 kubelet[2767]: E0115 05:47:58.007187 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4f84fcd8-c5mjh" Jan 15 05:47:58.007234 kubelet[2767]: E0115 05:47:58.007208 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f4f84fcd8-c5mjh" Jan 15 05:47:58.007283 kubelet[2767]: E0115 05:47:58.007249 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f4f84fcd8-c5mjh_calico-system(d5840778-1f77-4d10-b841-e5e49eaeab2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f4f84fcd8-c5mjh_calico-system(d5840778-1f77-4d10-b841-e5e49eaeab2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa21dd0ab11bd4ae07024bd14ba7249de60b6074c972565600fb09714ea23aa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f4f84fcd8-c5mjh" podUID="d5840778-1f77-4d10-b841-e5e49eaeab2c" Jan 15 05:47:58.012401 containerd[1600]: time="2026-01-15T05:47:58.011431790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9s7c,Uid:20e98906-80c1-467c-b4a1-25e2d8442e54,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.013255 kubelet[2767]: E0115 05:47:58.012297 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.013255 kubelet[2767]: E0115 05:47:58.012686 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s9s7c" Jan 15 05:47:58.013255 kubelet[2767]: E0115 05:47:58.012707 2767 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s9s7c" Jan 15 05:47:58.013903 kubelet[2767]: E0115 05:47:58.013550 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s9s7c_kube-system(20e98906-80c1-467c-b4a1-25e2d8442e54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s9s7c_kube-system(20e98906-80c1-467c-b4a1-25e2d8442e54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27f98319712fa16fd53ecae0445ff64de8a5080e6009ad27fc9d41633d5a6a0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s9s7c" podUID="20e98906-80c1-467c-b4a1-25e2d8442e54" Jan 15 05:47:58.024016 containerd[1600]: time="2026-01-15T05:47:58.023957991Z" level=error msg="Failed to destroy network for sandbox \"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.028683 containerd[1600]: time="2026-01-15T05:47:58.028649079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxxt5,Uid:0e9df5a5-f471-4333-a2bf-d06852202d61,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.029053 kubelet[2767]: E0115 05:47:58.029024 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.029179 kubelet[2767]: E0115 05:47:58.029147 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pxxt5" Jan 15 05:47:58.029280 kubelet[2767]: E0115 05:47:58.029240 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pxxt5" Jan 15 05:47:58.030037 kubelet[2767]: E0115 05:47:58.029610 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pxxt5_kube-system(0e9df5a5-f471-4333-a2bf-d06852202d61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pxxt5_kube-system(0e9df5a5-f471-4333-a2bf-d06852202d61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f34e6e077f5bcfdef765c25be664903b7657f01b5273bb3a0b2cab2658406f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pxxt5" podUID="0e9df5a5-f471-4333-a2bf-d06852202d61" Jan 15 05:47:58.031588 containerd[1600]: time="2026-01-15T05:47:58.031520238Z" level=error msg="Failed to destroy network for sandbox \"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.035088 containerd[1600]: time="2026-01-15T05:47:58.034993063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf57cd658-gnjlw,Uid:10bb0e11-4f57-4c87-8485-4dadf3148ce0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.035327 kubelet[2767]: E0115 05:47:58.035227 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.035327 kubelet[2767]: E0115 05:47:58.035306 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" Jan 15 05:47:58.035533 kubelet[2767]: E0115 05:47:58.035329 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" Jan 15 05:47:58.035533 kubelet[2767]: E0115 05:47:58.035419 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d42e43d8040e79a27f4179c44f77b197603ec6823aadb95fda6f52fc7c9e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:47:58.036908 containerd[1600]: time="2026-01-15T05:47:58.036863611Z" level=error msg="Failed to destroy network for sandbox \"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.041830 containerd[1600]: time="2026-01-15T05:47:58.041796128Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s4fx8,Uid:1a790238-cc0e-45da-b99f-c6adf406e452,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.042159 kubelet[2767]: E0115 05:47:58.042072 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.042254 kubelet[2767]: E0115 05:47:58.042178 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:58.042254 kubelet[2767]: E0115 05:47:58.042203 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-s4fx8" Jan 15 05:47:58.042323 kubelet[2767]: E0115 05:47:58.042258 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bce9dcf8eeb4ca834490c0506c04327b189cff8d9d745bc9d6ec4f7f0c5c328\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:47:58.044322 containerd[1600]: time="2026-01-15T05:47:58.044262501Z" level=error msg="Failed to destroy network for sandbox \"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.047121 containerd[1600]: time="2026-01-15T05:47:58.047017161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-7xzb7,Uid:6b6565ef-726d-4164-a834-22cf5e5bfe9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.047228 kubelet[2767]: E0115 05:47:58.047193 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:58.047283 kubelet[2767]: E0115 05:47:58.047230 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" Jan 15 05:47:58.047283 kubelet[2767]: E0115 05:47:58.047249 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" Jan 15 05:47:58.047425 kubelet[2767]: E0115 05:47:58.047310 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b57b41c74d603d189433380675c7c7821ba92c856de2ef574be65b8e111f32b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:47:58.287007 
kubelet[2767]: E0115 05:47:58.286820 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:47:58.288411 containerd[1600]: time="2026-01-15T05:47:58.287725922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 05:47:59.156884 systemd[1]: Created slice kubepods-besteffort-podcdab6cdb_eee3_4132_9980_23cedc6f5612.slice - libcontainer container kubepods-besteffort-podcdab6cdb_eee3_4132_9980_23cedc6f5612.slice. Jan 15 05:47:59.160458 containerd[1600]: time="2026-01-15T05:47:59.160406369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dt9mp,Uid:cdab6cdb-eee3-4132-9980-23cedc6f5612,Namespace:calico-system,Attempt:0,}" Jan 15 05:47:59.235058 containerd[1600]: time="2026-01-15T05:47:59.234982891Z" level=error msg="Failed to destroy network for sandbox \"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:59.237567 containerd[1600]: time="2026-01-15T05:47:59.237480553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dt9mp,Uid:cdab6cdb-eee3-4132-9980-23cedc6f5612,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:59.237766 kubelet[2767]: E0115 05:47:59.237726 2767 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 05:47:59.237795 systemd[1]: run-netns-cni\x2daaf0dc99\x2dce34\x2da9f0\x2dbd38\x2d78a7927d1550.mount: Deactivated successfully. 
Jan 15 05:47:59.237897 kubelet[2767]: E0115 05:47:59.237845 2767 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:59.237897 kubelet[2767]: E0115 05:47:59.237870 2767 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dt9mp" Jan 15 05:47:59.237993 kubelet[2767]: E0115 05:47:59.237912 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58b3057912844b715bdb0e9475bb1785e2c9d6a72dc528ea74cc5aa42a443fbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:04.265076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3392631967.mount: Deactivated successfully. 
Jan 15 05:48:04.433749 containerd[1600]: time="2026-01-15T05:48:04.433533537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:48:04.435042 containerd[1600]: time="2026-01-15T05:48:04.434731274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 15 05:48:04.436336 containerd[1600]: time="2026-01-15T05:48:04.436247761Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:48:04.439116 containerd[1600]: time="2026-01-15T05:48:04.439023695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 05:48:04.440004 containerd[1600]: time="2026-01-15T05:48:04.439921543Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.152146709s" Jan 15 05:48:04.440135 containerd[1600]: time="2026-01-15T05:48:04.440008676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 15 05:48:04.464275 containerd[1600]: time="2026-01-15T05:48:04.464242022Z" level=info msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 05:48:04.480858 containerd[1600]: time="2026-01-15T05:48:04.480763311Z" level=info msg="Container 5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:48:04.493633 containerd[1600]: time="2026-01-15T05:48:04.493533602Z" level=info msg="CreateContainer within sandbox \"25a35161b2030ffbfb823f7c1fcbae84cc93f625072adf9bcbdeb16510087c95\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129\"" Jan 15 05:48:04.494489 containerd[1600]: time="2026-01-15T05:48:04.494340060Z" level=info msg="StartContainer for \"5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129\"" Jan 15 05:48:04.497550 containerd[1600]: time="2026-01-15T05:48:04.497483529Z" level=info msg="connecting to shim 5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129" address="unix:///run/containerd/s/e9633b176eb0a5917646f05107573d97bf0a12282bd49ea8526088552f7023f8" protocol=ttrpc version=3 Jan 15 05:48:04.540698 systemd[1]: Started cri-containerd-5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129.scope - libcontainer container 5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129. 
Jan 15 05:48:04.631000 audit: BPF prog-id=177 op=LOAD Jan 15 05:48:04.633772 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 15 05:48:04.633861 kernel: audit: type=1334 audit(1768456084.631:575): prog-id=177 op=LOAD Jan 15 05:48:04.631000 audit[3905]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.646088 kernel: audit: type=1300 audit(1768456084.631:575): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.646148 kernel: audit: type=1327 audit(1768456084.631:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.631000 audit: BPF prog-id=178 op=LOAD Jan 15 05:48:04.658400 kernel: audit: type=1334 audit(1768456084.631:576): prog-id=178 op=LOAD Jan 15 05:48:04.658529 kernel: audit: type=1300 audit(1768456084.631:576): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.631000 audit[3905]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.683543 kernel: audit: type=1327 audit(1768456084.631:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.683710 kernel: audit: type=1334 audit(1768456084.631:577): prog-id=178 op=UNLOAD Jan 15 05:48:04.631000 audit: BPF prog-id=178 op=UNLOAD Jan 15 05:48:04.631000 audit[3905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.701243 kernel: audit: type=1300 
audit(1768456084.631:577): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.701489 kernel: audit: type=1327 audit(1768456084.631:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.717514 kernel: audit: type=1334 audit(1768456084.631:578): prog-id=177 op=UNLOAD Jan 15 05:48:04.631000 audit: BPF prog-id=177 op=UNLOAD Jan 15 05:48:04.631000 audit[3905]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.631000 audit: BPF prog-id=179 op=LOAD Jan 15 05:48:04.631000 audit[3905]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3366 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:04.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532383264353666663139653333303064393435383439613336656266 Jan 15 05:48:04.729915 containerd[1600]: time="2026-01-15T05:48:04.729840996Z" level=info msg="StartContainer for \"5282d56ff19e3300d945849a36ebf8792565a8ef841f1eb3c30e90eed4a8c129\" returns successfully" Jan 15 05:48:04.834190 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 05:48:04.834599 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 15 05:48:05.007948 kubelet[2767]: I0115 05:48:05.007248 2767 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d5840778-1f77-4d10-b841-e5e49eaeab2c" (UID: "d5840778-1f77-4d10-b841-e5e49eaeab2c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 05:48:05.007948 kubelet[2767]: I0115 05:48:05.007430 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-ca-bundle\") pod \"d5840778-1f77-4d10-b841-e5e49eaeab2c\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " Jan 15 05:48:05.007948 kubelet[2767]: I0115 05:48:05.007499 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-backend-key-pair\") pod \"d5840778-1f77-4d10-b841-e5e49eaeab2c\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " Jan 15 05:48:05.007948 kubelet[2767]: I0115 05:48:05.007534 2767 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwhb9\" (UniqueName: \"kubernetes.io/projected/d5840778-1f77-4d10-b841-e5e49eaeab2c-kube-api-access-pwhb9\") pod \"d5840778-1f77-4d10-b841-e5e49eaeab2c\" (UID: \"d5840778-1f77-4d10-b841-e5e49eaeab2c\") " Jan 15 05:48:05.007948 kubelet[2767]: I0115 05:48:05.007626 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 15 05:48:05.020684 kubelet[2767]: I0115 05:48:05.020595 2767 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5840778-1f77-4d10-b841-e5e49eaeab2c-kube-api-access-pwhb9" (OuterVolumeSpecName: "kube-api-access-pwhb9") pod "d5840778-1f77-4d10-b841-e5e49eaeab2c" (UID: "d5840778-1f77-4d10-b841-e5e49eaeab2c"). InnerVolumeSpecName "kube-api-access-pwhb9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 05:48:05.021145 kubelet[2767]: I0115 05:48:05.021089 2767 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d5840778-1f77-4d10-b841-e5e49eaeab2c" (UID: "d5840778-1f77-4d10-b841-e5e49eaeab2c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 05:48:05.108780 kubelet[2767]: I0115 05:48:05.108655 2767 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d5840778-1f77-4d10-b841-e5e49eaeab2c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 15 05:48:05.108780 kubelet[2767]: I0115 05:48:05.108689 2767 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwhb9\" (UniqueName: \"kubernetes.io/projected/d5840778-1f77-4d10-b841-e5e49eaeab2c-kube-api-access-pwhb9\") on node \"localhost\" DevicePath \"\"" Jan 15 05:48:05.266411 systemd[1]: var-lib-kubelet-pods-d5840778\x2d1f77\x2d4d10\x2db841\x2de5e49eaeab2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwhb9.mount: Deactivated successfully. Jan 15 05:48:05.266544 systemd[1]: var-lib-kubelet-pods-d5840778\x2d1f77\x2d4d10\x2db841\x2de5e49eaeab2c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 15 05:48:05.307504 kubelet[2767]: E0115 05:48:05.307063 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:05.315065 systemd[1]: Removed slice kubepods-besteffort-podd5840778_1f77_4d10_b841_e5e49eaeab2c.slice - libcontainer container kubepods-besteffort-podd5840778_1f77_4d10_b841_e5e49eaeab2c.slice. Jan 15 05:48:06.369653 kubelet[2767]: I0115 05:48:06.369570 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hpwzd" podStartSLOduration=2.771445121 podStartE2EDuration="15.369551756s" podCreationTimestamp="2026-01-15 05:47:51 +0000 UTC" firstStartedPulling="2026-01-15 05:47:51.842818061 +0000 UTC m=+19.904915457" lastFinishedPulling="2026-01-15 05:48:04.440924698 +0000 UTC m=+32.503022092" observedRunningTime="2026-01-15 05:48:06.367475455 +0000 UTC m=+34.429572850" watchObservedRunningTime="2026-01-15 05:48:06.369551756 +0000 UTC m=+34.431649151" Jan 15 05:48:06.387076 systemd[1]: Created slice kubepods-besteffort-pod9a14d1ad_fdab_4109_839d_24ca471bacb8.slice - libcontainer container kubepods-besteffort-pod9a14d1ad_fdab_4109_839d_24ca471bacb8.slice. Jan 15 05:48:06.420841 kubelet[2767]: I0115 05:48:06.420718 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a14d1ad-fdab-4109-839d-24ca471bacb8-whisker-ca-bundle\") pod \"whisker-5d459cf79d-58jzf\" (UID: \"9a14d1ad-fdab-4109-839d-24ca471bacb8\") " pod="calico-system/whisker-5d459cf79d-58jzf" Jan 15 05:48:06.421063 kubelet[2767]: I0115 05:48:06.420914 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a14d1ad-fdab-4109-839d-24ca471bacb8-whisker-backend-key-pair\") pod \"whisker-5d459cf79d-58jzf\" (UID: \"9a14d1ad-fdab-4109-839d-24ca471bacb8\") " pod="calico-system/whisker-5d459cf79d-58jzf" Jan 15 05:48:06.421063 kubelet[2767]: I0115 05:48:06.420936 2767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ncpr\" (UniqueName: \"kubernetes.io/projected/9a14d1ad-fdab-4109-839d-24ca471bacb8-kube-api-access-7ncpr\") pod \"whisker-5d459cf79d-58jzf\" (UID: \"9a14d1ad-fdab-4109-839d-24ca471bacb8\") " pod="calico-system/whisker-5d459cf79d-58jzf" Jan 15 05:48:06.692296 containerd[1600]: time="2026-01-15T05:48:06.692082315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d459cf79d-58jzf,Uid:9a14d1ad-fdab-4109-839d-24ca471bacb8,Namespace:calico-system,Attempt:0,}" Jan 15 05:48:06.961687 systemd-networkd[1507]: calid033cf3ea45: Link UP Jan 15 05:48:06.962055 systemd-networkd[1507]: calid033cf3ea45: Gained carrier Jan 15 05:48:06.981662 containerd[1600]: 2026-01-15 05:48:06.732 [INFO][3974] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 05:48:06.981662 containerd[1600]: 2026-01-15 05:48:06.765 [INFO][3974] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d459cf79d--58jzf-eth0 whisker-5d459cf79d- calico-system 9a14d1ad-fdab-4109-839d-24ca471bacb8 959 0 2026-01-15 05:48:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d459cf79d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d459cf79d-58jzf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid033cf3ea45 [] [] }} ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-" Jan 15 05:48:06.981662 containerd[1600]: 2026-01-15 05:48:06.765 [INFO][3974] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.981662 containerd[1600]: 2026-01-15 05:48:06.858 [INFO][3990] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" HandleID="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Workload="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.859 [INFO][3990] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" HandleID="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Workload="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a1020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d459cf79d-58jzf", "timestamp":"2026-01-15 05:48:06.858464261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.859 [INFO][3990] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.859 [INFO][3990] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.860 [INFO][3990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.874 [INFO][3990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" host="localhost" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.892 [INFO][3990] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.911 [INFO][3990] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.914 [INFO][3990] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.917 [INFO][3990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:06.982249 containerd[1600]: 2026-01-15 05:48:06.918 [INFO][3990] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" host="localhost" Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.920 [INFO][3990] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.929 [INFO][3990] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" host="localhost" Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.942 [INFO][3990] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" host="localhost" Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.943 [INFO][3990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" host="localhost" Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.943 [INFO][3990] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:48:06.982579 containerd[1600]: 2026-01-15 05:48:06.943 [INFO][3990] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" HandleID="k8s-pod-network.cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Workload="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.982732 containerd[1600]: 2026-01-15 05:48:06.947 [INFO][3974] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d459cf79d--58jzf-eth0", GenerateName:"whisker-5d459cf79d-", Namespace:"calico-system", SelfLink:"", UID:"9a14d1ad-fdab-4109-839d-24ca471bacb8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d459cf79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d459cf79d-58jzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid033cf3ea45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:06.982732 containerd[1600]: 2026-01-15 05:48:06.947 [INFO][3974] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.982842 containerd[1600]: 2026-01-15 05:48:06.947 [INFO][3974] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid033cf3ea45 ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.982842 containerd[1600]: 2026-01-15 05:48:06.961 [INFO][3974] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:06.982895 containerd[1600]: 2026-01-15 05:48:06.961 [INFO][3974] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d459cf79d--58jzf-eth0", GenerateName:"whisker-5d459cf79d-", Namespace:"calico-system", SelfLink:"", UID:"9a14d1ad-fdab-4109-839d-24ca471bacb8", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 48, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d459cf79d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d", Pod:"whisker-5d459cf79d-58jzf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid033cf3ea45", MAC:"3a:6b:7d:30:d1:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:06.982975 containerd[1600]: 2026-01-15 05:48:06.976 [INFO][3974] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" Namespace="calico-system" Pod="whisker-5d459cf79d-58jzf" WorkloadEndpoint="localhost-k8s-whisker--5d459cf79d--58jzf-eth0" Jan 15 05:48:07.091848 containerd[1600]: time="2026-01-15T05:48:07.091726574Z" level=info msg="connecting to shim cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d" address="unix:///run/containerd/s/0236f0a64a15c40804a6d899b76b495d6da11253e1b416314cefd5af30e8790f" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:07.137617 systemd[1]: Started cri-containerd-cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d.scope - libcontainer container cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d. 
Jan 15 05:48:07.155000 audit: BPF prog-id=180 op=LOAD Jan 15 05:48:07.156000 audit: BPF prog-id=181 op=LOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=181 op=UNLOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=182 op=LOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=183 op=LOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=183 op=UNLOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=182 op=UNLOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.156000 audit: BPF prog-id=184 op=LOAD Jan 15 05:48:07.156000 audit[4026]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4015 pid=4026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365653536613061303337323364656363373938323337313338626264 Jan 15 05:48:07.158489 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:07.198943 containerd[1600]: time="2026-01-15T05:48:07.198732989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d459cf79d-58jzf,Uid:9a14d1ad-fdab-4109-839d-24ca471bacb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"cee56a0a03723decc798237138bbdf9e1037223ef157e0b4ec142cbd130eb26d\"" Jan 15 05:48:07.207821 containerd[1600]: time="2026-01-15T05:48:07.206673854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:48:07.262626 containerd[1600]: time="2026-01-15T05:48:07.262336997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:07.264115 containerd[1600]: time="2026-01-15T05:48:07.264077612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:48:07.264386 containerd[1600]: time="2026-01-15T05:48:07.264112817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:07.264512 kubelet[2767]: E0115 05:48:07.264478 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:07.264512 kubelet[2767]: E0115 05:48:07.264527 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:07.267272 kubelet[2767]: E0115 05:48:07.267162 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:73cf578236394d7092cc66aa5ac2392b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:07.269382 containerd[1600]: time="2026-01-15T05:48:07.269246347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:48:07.326879 containerd[1600]: time="2026-01-15T05:48:07.326764695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:07.328428 containerd[1600]: time="2026-01-15T05:48:07.328202776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:48:07.328428 containerd[1600]: time="2026-01-15T05:48:07.328243782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:07.328572 kubelet[2767]: E0115 05:48:07.328464 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:07.328572 kubelet[2767]: E0115 05:48:07.328508 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:07.328797 kubelet[2767]: E0115 05:48:07.328628 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:07.330073 kubelet[2767]: E0115 05:48:07.329909 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:48:07.711000 audit: BPF prog-id=185 op=LOAD Jan 15 05:48:07.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff2c06d80 a2=98 a3=1fffffffffffffff items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.711000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.711000 audit: BPF prog-id=185 op=UNLOAD Jan 15 05:48:07.711000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff2c06d50 a3=0 items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.711000 audit: BPF prog-id=186 op=LOAD Jan 15 05:48:07.711000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff2c06c60 a2=94 a3=3 items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.712000 audit: BPF prog-id=186 op=UNLOAD Jan 15 05:48:07.712000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff2c06c60 a2=94 a3=3 items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.712000 audit: BPF prog-id=187 op=LOAD Jan 15 05:48:07.712000 audit[4179]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff2c06ca0 a2=94 a3=7ffff2c06e80 items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.712000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.712000 audit: BPF prog-id=187 op=UNLOAD Jan 15 05:48:07.712000 audit[4179]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffff2c06ca0 a2=94 a3=7ffff2c06e80 items=0 ppid=4062 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.712000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 15 05:48:07.714000 audit: BPF prog-id=188 op=LOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8b9b61d0 a2=98 a3=3 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.714000 audit: BPF prog-id=188 op=UNLOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff8b9b61a0 a3=0 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.714000 audit: BPF prog-id=189 op=LOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8b9b5fc0 a2=94 a3=54428f items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.714000 audit: BPF prog-id=189 op=UNLOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff8b9b5fc0 a2=94 a3=54428f items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.714000 audit: BPF prog-id=190 op=LOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8b9b5ff0 a2=94 a3=2 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.714000 audit: BPF prog-id=190 op=UNLOAD Jan 15 05:48:07.714000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff8b9b5ff0 a2=0 a3=2 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.714000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.935000 audit: BPF prog-id=191 op=LOAD Jan 15 05:48:07.935000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff8b9b5eb0 a2=94 a3=1 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 
05:48:07.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.936000 audit: BPF prog-id=191 op=UNLOAD Jan 15 05:48:07.936000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff8b9b5eb0 a2=94 a3=1 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.936000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=192 op=LOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8b9b5ea0 a2=94 a3=4 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=192 op=UNLOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff8b9b5ea0 a2=0 a3=4 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=193 op=LOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8b9b5d00 a2=94 a3=5 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=193 op=UNLOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff8b9b5d00 a2=0 a3=5 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=194 op=LOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8b9b5f20 a2=94 a3=6 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=194 op=UNLOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff8b9b5f20 a2=0 a3=6 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.945000 audit: BPF prog-id=195 op=LOAD Jan 15 05:48:07.945000 audit[4180]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff8b9b56d0 a2=94 a3=88 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.945000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.946000 audit: BPF prog-id=196 op=LOAD Jan 15 05:48:07.946000 audit[4180]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff8b9b5550 a2=94 a3=2 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.946000 audit: BPF prog-id=196 op=UNLOAD Jan 15 05:48:07.946000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff8b9b5580 a2=0 a3=7fff8b9b5680 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.946000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.947000 audit: BPF prog-id=195 op=UNLOAD Jan 15 05:48:07.947000 audit[4180]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=272ddd10 a2=0 a3=3444ed8e9c3f9c1 items=0 ppid=4062 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.947000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 15 05:48:07.960000 audit: BPF prog-id=197 op=LOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5e230c30 a2=98 a3=1999999999999999 items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:07.960000 audit: BPF prog-id=197 op=UNLOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff5e230c00 a3=0 items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:07.960000 audit: BPF prog-id=198 op=LOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5e230b10 a2=94 a3=ffff items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:07.960000 audit: BPF prog-id=198 op=UNLOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5e230b10 a2=94 a3=ffff items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:07.960000 audit: BPF prog-id=199 op=LOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5e230b50 a2=94 a3=7fff5e230d30 items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:07.960000 audit: BPF prog-id=199 op=UNLOAD Jan 15 05:48:07.960000 audit[4183]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff5e230b50 a2=94 a3=7fff5e230d30 items=0 ppid=4062 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:07.960000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 15 05:48:08.036649 systemd-networkd[1507]: vxlan.calico: Link UP Jan 15 05:48:08.036661 systemd-networkd[1507]: vxlan.calico: Gained carrier Jan 15 05:48:08.079000 audit: BPF prog-id=200 op=LOAD Jan 15 05:48:08.079000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd043d3190 a2=98 a3=0 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.079000 audit: BPF prog-id=200 op=UNLOAD Jan 15 05:48:08.079000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd043d3160 a3=0 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.079000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=201 op=LOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd043d2fa0 a2=94 a3=54428f items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=201 op=UNLOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd043d2fa0 a2=94 a3=54428f items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=202 op=LOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd043d2fd0 a2=94 a3=2 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=202 op=UNLOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd043d2fd0 a2=0 a3=2 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=203 op=LOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd043d2d80 a2=94 a3=4 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=203 op=UNLOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd043d2d80 a2=94 a3=4 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=204 op=LOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd043d2e80 a2=94 a3=7ffd043d3000 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.080000 audit: BPF prog-id=204 op=UNLOAD Jan 15 05:48:08.080000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd043d2e80 a2=0 a3=7ffd043d3000 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.080000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.081000 audit: BPF prog-id=205 op=LOAD Jan 15 05:48:08.081000 audit[4208]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd043d25b0 a2=94 a3=2 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.081000 audit: BPF prog-id=205 op=UNLOAD Jan 15 05:48:08.081000 audit[4208]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd043d25b0 a2=0 a3=2 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.081000 audit: BPF prog-id=206 op=LOAD Jan 15 05:48:08.081000 audit[4208]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd043d26b0 a2=94 a3=30 items=0 ppid=4062 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.081000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 15 05:48:08.092000 audit: BPF prog-id=207 op=LOAD Jan 15 05:48:08.092000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffccf7ecf30 a2=98 a3=0 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.092000 audit: BPF prog-id=207 op=UNLOAD Jan 15 05:48:08.092000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffccf7ecf00 a3=0 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.092000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.093000 audit: BPF prog-id=208 op=LOAD Jan 15 05:48:08.093000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccf7ecd20 a2=94 a3=54428f items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.093000 audit: BPF prog-id=208 op=UNLOAD Jan 15 05:48:08.093000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccf7ecd20 a2=94 a3=54428f items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.093000 audit: BPF prog-id=209 op=LOAD Jan 15 05:48:08.093000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccf7ecd50 a2=94 a3=2 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.093000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.093000 audit: BPF prog-id=209 op=UNLOAD Jan 15 05:48:08.093000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccf7ecd50 a2=0 a3=2 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.093000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.152218 kubelet[2767]: I0115 05:48:08.152119 2767 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5840778-1f77-4d10-b841-e5e49eaeab2c" path="/var/lib/kubelet/pods/d5840778-1f77-4d10-b841-e5e49eaeab2c/volumes" Jan 15 05:48:08.287000 audit: BPF prog-id=210 op=LOAD Jan 15 05:48:08.287000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffccf7ecc10 a2=94 a3=1 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.287000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.288000 audit: BPF prog-id=210 op=UNLOAD Jan 15 05:48:08.288000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffccf7ecc10 a2=94 a3=1 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.288000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=211 op=LOAD Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccf7ecc00 a2=94 a3=4 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=211 op=UNLOAD Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffccf7ecc00 a2=0 a3=4 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=212 op=LOAD 
Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffccf7eca60 a2=94 a3=5 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=212 op=UNLOAD Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffccf7eca60 a2=0 a3=5 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=213 op=LOAD Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccf7ecc80 a2=94 a3=6 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.298000 audit: BPF prog-id=213 op=UNLOAD Jan 15 05:48:08.298000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffccf7ecc80 a2=0 a3=6 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.298000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.299000 audit: BPF prog-id=214 op=LOAD Jan 15 05:48:08.299000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffccf7ec430 a2=94 a3=88 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.299000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.299000 audit: BPF prog-id=215 op=LOAD Jan 15 05:48:08.299000 audit[4216]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffccf7ec2b0 a2=94 a3=2 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.299000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.299000 audit: BPF prog-id=215 op=UNLOAD Jan 15 05:48:08.299000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffccf7ec2e0 a2=0 a3=7ffccf7ec3e0 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.299000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.301000 audit: BPF prog-id=214 op=UNLOAD Jan 15 05:48:08.301000 audit[4216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=400a4d10 a2=0 a3=d28f554be23be791 items=0 ppid=4062 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.301000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 15 05:48:08.310000 audit: BPF prog-id=206 op=UNLOAD Jan 15 05:48:08.310000 audit[4062]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00084a100 a2=0 a3=0 items=0 ppid=4053 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.310000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 15 05:48:08.321590 kubelet[2767]: E0115 05:48:08.321554 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:48:08.364000 audit[4229]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:08.364000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6ced35f0 a2=0 a3=7ffd6ced35dc items=0 ppid=2920 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.364000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:08.370000 audit[4229]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:08.370000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd6ced35f0 a2=0 a3=0 items=0 ppid=2920 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:08.396000 audit[4244]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:08.396000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcee07fa10 a2=0 a3=7ffcee07f9fc items=0 ppid=4062 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:08.397000 audit[4245]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:08.397000 audit[4245]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffff96b18c0 a2=0 a3=7ffff96b18ac items=0 ppid=4062 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:08.411000 audit[4242]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:08.411000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffe203f4f0 a2=0 a3=7fffe203f4dc items=0 ppid=4062 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.411000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:08.418000 audit[4247]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:08.418000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe05b65870 a2=0 a3=7ffe05b6585c items=0 ppid=4062 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:08.418000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:09.013711 systemd-networkd[1507]: calid033cf3ea45: Gained IPv6LL Jan 15 05:48:09.146812 containerd[1600]: time="2026-01-15T05:48:09.146757917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf57cd658-gnjlw,Uid:10bb0e11-4f57-4c87-8485-4dadf3148ce0,Namespace:calico-system,Attempt:0,}" Jan 15 05:48:09.147760 containerd[1600]: time="2026-01-15T05:48:09.146937029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-rjpxd,Uid:6709b21e-b302-4c1a-b8f9-4c50d880ff8b,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:48:09.207153 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL Jan 15 05:48:09.319458 systemd-networkd[1507]: caliea6aa0571b5: Link UP Jan 15 05:48:09.320045 systemd-networkd[1507]: caliea6aa0571b5: Gained carrier Jan 15 05:48:09.346487 containerd[1600]: 2026-01-15 05:48:09.206 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0 calico-kube-controllers-bf57cd658- calico-system 10bb0e11-4f57-4c87-8485-4dadf3148ce0 884 0 2026-01-15 05:47:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bf57cd658 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-bf57cd658-gnjlw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea6aa0571b5 [] [] }} ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-" Jan 15 05:48:09.346487 containerd[1600]: 2026-01-15 05:48:09.208 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.346487 containerd[1600]: 2026-01-15 05:48:09.240 [INFO][4285] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" HandleID="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Workload="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.240 [INFO][4285] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" HandleID="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Workload="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-bf57cd658-gnjlw", "timestamp":"2026-01-15 05:48:09.240162256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.240 [INFO][4285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.240 [INFO][4285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.240 [INFO][4285] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.254 [INFO][4285] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" host="localhost" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.271 [INFO][4285] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.279 [INFO][4285] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.282 [INFO][4285] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.285 [INFO][4285] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:09.346699 containerd[1600]: 2026-01-15 05:48:09.285 [INFO][4285] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" host="localhost" Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.290 [INFO][4285] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.302 [INFO][4285] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" host="localhost" Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4285] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" host="localhost" Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4285] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" host="localhost" Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:48:09.346914 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4285] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" HandleID="k8s-pod-network.946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Workload="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.348574 containerd[1600]: 2026-01-15 05:48:09.313 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0", GenerateName:"calico-kube-controllers-bf57cd658-", Namespace:"calico-system", SelfLink:"", UID:"10bb0e11-4f57-4c87-8485-4dadf3148ce0", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bf57cd658", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-bf57cd658-gnjlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea6aa0571b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:09.348680 containerd[1600]: 2026-01-15 05:48:09.315 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.348680 containerd[1600]: 2026-01-15 05:48:09.315 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea6aa0571b5 ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.348680 containerd[1600]: 2026-01-15 05:48:09.320 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.348739 containerd[1600]: 2026-01-15 05:48:09.320 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0", GenerateName:"calico-kube-controllers-bf57cd658-", Namespace:"calico-system", SelfLink:"", UID:"10bb0e11-4f57-4c87-8485-4dadf3148ce0", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bf57cd658", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b", Pod:"calico-kube-controllers-bf57cd658-gnjlw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea6aa0571b5", MAC:"52:05:18:a0:64:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:09.348821 containerd[1600]: 2026-01-15 05:48:09.339 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" Namespace="calico-system" Pod="calico-kube-controllers-bf57cd658-gnjlw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--bf57cd658--gnjlw-eth0" Jan 15 05:48:09.370000 audit[4310]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:09.370000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffef54f5440 a2=0 a3=7ffef54f542c items=0 ppid=4062 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.370000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:09.413409 containerd[1600]: time="2026-01-15T05:48:09.412301092Z" level=info msg="connecting to shim 946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b" address="unix:///run/containerd/s/6b99d0eafbd75e108d00486fc39ddcc972c09f7f0be2daabdae787e5375102d2" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:09.464904 systemd-networkd[1507]: cali4e1d0f6d7b9: Link UP Jan 15 05:48:09.467950 systemd-networkd[1507]: cali4e1d0f6d7b9: Gained carrier Jan 15 05:48:09.494153 systemd[1]: Started 
cri-containerd-946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b.scope - libcontainer container 946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b. Jan 15 05:48:09.504670 containerd[1600]: 2026-01-15 05:48:09.206 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0 calico-apiserver-6988569c94- calico-apiserver 6709b21e-b302-4c1a-b8f9-4c50d880ff8b 887 0 2026-01-15 05:47:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6988569c94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6988569c94-rjpxd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e1d0f6d7b9 [] [] }} ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-" Jan 15 05:48:09.504670 containerd[1600]: 2026-01-15 05:48:09.207 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.504670 containerd[1600]: 2026-01-15 05:48:09.249 [INFO][4283] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" HandleID="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Workload="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.249 [INFO][4283] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" HandleID="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Workload="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002de780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6988569c94-rjpxd", "timestamp":"2026-01-15 05:48:09.249291564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.249 [INFO][4283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.308 [INFO][4283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.356 [INFO][4283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" host="localhost" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.386 [INFO][4283] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.414 [INFO][4283] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.421 [INFO][4283] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.425 [INFO][4283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:09.504960 containerd[1600]: 2026-01-15 05:48:09.425 [INFO][4283] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" host="localhost" Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.429 [INFO][4283] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14 Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.442 [INFO][4283] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" host="localhost" Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.454 [INFO][4283] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" host="localhost" Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.454 [INFO][4283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" host="localhost" Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.454 [INFO][4283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:48:09.505445 containerd[1600]: 2026-01-15 05:48:09.454 [INFO][4283] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" HandleID="k8s-pod-network.75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Workload="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.505623 containerd[1600]: 2026-01-15 05:48:09.459 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0", GenerateName:"calico-apiserver-6988569c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"6709b21e-b302-4c1a-b8f9-4c50d880ff8b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6988569c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6988569c94-rjpxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e1d0f6d7b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:09.505716 containerd[1600]: 2026-01-15 05:48:09.459 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.505716 containerd[1600]: 2026-01-15 05:48:09.459 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e1d0f6d7b9 ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.505716 containerd[1600]: 2026-01-15 05:48:09.466 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.505805 containerd[1600]: 2026-01-15 05:48:09.471 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0", GenerateName:"calico-apiserver-6988569c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"6709b21e-b302-4c1a-b8f9-4c50d880ff8b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6988569c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14", Pod:"calico-apiserver-6988569c94-rjpxd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e1d0f6d7b9", MAC:"4e:99:38:62:64:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:09.505891 containerd[1600]: 2026-01-15 05:48:09.493 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-rjpxd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--rjpxd-eth0" Jan 15 05:48:09.520000 audit: BPF prog-id=216 op=LOAD Jan 15 05:48:09.521000 audit: BPF prog-id=217 op=LOAD Jan 15 05:48:09.521000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.522000 audit: BPF prog-id=217 op=UNLOAD Jan 15 05:48:09.522000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.522000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.522000 audit: BPF prog-id=218 op=LOAD Jan 15 05:48:09.522000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.523000 audit[4359]: NETFILTER_CFG table=filter:126 family=2 entries=54 op=nft_register_chain pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:09.523000 audit[4359]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffeded934e0 a2=0 a3=7ffeded934cc items=0 ppid=4062 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.523000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:09.523000 audit: BPF prog-id=219 op=LOAD Jan 15 05:48:09.523000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.523000 audit: BPF prog-id=219 op=UNLOAD Jan 15 05:48:09.523000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.523000 audit: BPF prog-id=218 op=UNLOAD Jan 15 05:48:09.523000 audit[4333]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.523000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.523000 audit: BPF prog-id=220 op=LOAD Jan 15 05:48:09.523000 audit[4333]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4321 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934366530356661326439653237363631663261333430373130396436 Jan 15 05:48:09.528077 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:09.544396 containerd[1600]: time="2026-01-15T05:48:09.544155898Z" level=info msg="connecting to shim 75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14" address="unix:///run/containerd/s/423dec07c14d63aad72dab9acd59d8df1b0f2bd444fbcae9df644d41463aa1e2" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:09.589617 systemd[1]: Started cri-containerd-75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14.scope - libcontainer container 75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14. Jan 15 05:48:09.614606 containerd[1600]: time="2026-01-15T05:48:09.614533752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bf57cd658-gnjlw,Uid:10bb0e11-4f57-4c87-8485-4dadf3148ce0,Namespace:calico-system,Attempt:0,} returns sandbox id \"946e05fa2d9e27661f2a3407109d60e172dce910688ee380dbf3bb9f34dc0a5b\"" Jan 15 05:48:09.619683 containerd[1600]: time="2026-01-15T05:48:09.619620239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:48:09.631000 audit: BPF prog-id=221 op=LOAD Jan 15 05:48:09.635528 kernel: kauditd_printk_skb: 260 callbacks suppressed Jan 15 05:48:09.635690 kernel: audit: type=1334 audit(1768456089.632:667): prog-id=222 op=LOAD Jan 15 05:48:09.632000 audit: BPF prog-id=222 op=LOAD Jan 15 05:48:09.632000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.641857 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:09.647980 kernel: audit: type=1300 audit(1768456089.632:667): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.648087 kernel: audit: type=1327 audit(1768456089.632:667): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.632000 audit: BPF prog-id=222 op=UNLOAD Jan 15 05:48:09.659775 kernel: audit: type=1334 audit(1768456089.632:668): prog-id=222 op=UNLOAD Jan 15 05:48:09.659838 kernel: audit: type=1300 audit(1768456089.632:668): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.632000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.669821 kernel: audit: type=1327 audit(1768456089.632:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.632000 audit: BPF prog-id=223 op=LOAD Jan 15 05:48:09.683195 kernel: audit: type=1334 audit(1768456089.632:669): prog-id=223 op=LOAD Jan 15 05:48:09.683246 kernel: audit: type=1300 audit(1768456089.632:669): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.632000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.684519 containerd[1600]: time="2026-01-15T05:48:09.684430691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:09.689475 containerd[1600]: time="2026-01-15T05:48:09.689435332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:48:09.689604 containerd[1600]: time="2026-01-15T05:48:09.689494593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=0" Jan 15 05:48:09.689884 kubelet[2767]: E0115 05:48:09.689809 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:09.690256 kubelet[2767]: E0115 05:48:09.689887 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:09.690873 kubelet[2767]: E0115 05:48:09.690081 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:09.692766 kubelet[2767]: E0115 05:48:09.692703 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:09.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.705683 kernel: audit: type=1327 audit(1768456089.632:669): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.705757 kernel: audit: type=1334 audit(1768456089.633:670): prog-id=224 op=LOAD Jan 15 05:48:09.633000 audit: BPF prog-id=224 op=LOAD Jan 15 05:48:09.633000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.633000 audit: BPF prog-id=224 op=UNLOAD Jan 15 05:48:09.633000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.633000 audit: BPF prog-id=223 op=UNLOAD Jan 15 05:48:09.633000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.633000 audit: BPF prog-id=225 op=LOAD Jan 15 05:48:09.633000 audit[4379]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4368 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:09.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735636133383338353434346131376334393532666264643563666639 Jan 15 05:48:09.712994 containerd[1600]: time="2026-01-15T05:48:09.712415020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-rjpxd,Uid:6709b21e-b302-4c1a-b8f9-4c50d880ff8b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"75ca38385444a17c4952fbdd5cff9b2c3de79cff9f18b328c0810f9cc12e9d14\"" Jan 15 05:48:09.714262 containerd[1600]: time="2026-01-15T05:48:09.714112760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:09.772626 containerd[1600]: time="2026-01-15T05:48:09.772508633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:09.773987 containerd[1600]: time="2026-01-15T05:48:09.773886943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:48:09.773987 containerd[1600]: time="2026-01-15T05:48:09.773937748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:09.774332 kubelet[2767]: E0115 05:48:09.774287 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:09.774433 kubelet[2767]: E0115 05:48:09.774407 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:09.774778 kubelet[2767]: E0115 05:48:09.774678 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6bnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:09.776089 kubelet[2767]: E0115 05:48:09.775905 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:10.146971 kubelet[2767]: E0115 05:48:10.146764 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:10.148109 containerd[1600]: time="2026-01-15T05:48:10.147853017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9s7c,Uid:20e98906-80c1-467c-b4a1-25e2d8442e54,Namespace:kube-system,Attempt:0,}" Jan 15 05:48:10.323432 systemd-networkd[1507]: cali8c22b2513c6: Link UP Jan 15 05:48:10.324740 systemd-networkd[1507]: cali8c22b2513c6: Gained carrier Jan 15 05:48:10.328724 kubelet[2767]: E0115 05:48:10.328629 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:10.331803 kubelet[2767]: E0115 05:48:10.331764 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:10.361083 containerd[1600]: 2026-01-15 05:48:10.209 [INFO][4409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0 coredns-674b8bbfcf- kube-system 20e98906-80c1-467c-b4a1-25e2d8442e54 882 0 2026-01-15 05:47:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-s9s7c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c22b2513c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-" Jan 15 05:48:10.361083 containerd[1600]: 2026-01-15 05:48:10.209 [INFO][4409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.361083 containerd[1600]: 2026-01-15 05:48:10.258 [INFO][4422] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" HandleID="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Workload="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.258 [INFO][4422] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" HandleID="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Workload="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000529af0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-s9s7c", "timestamp":"2026-01-15 05:48:10.258695233 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.258 [INFO][4422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.259 [INFO][4422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.259 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.268 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" host="localhost" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.276 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.284 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.287 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.290 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:10.362869 containerd[1600]: 2026-01-15 05:48:10.290 [INFO][4422] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" host="localhost" Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.293 [INFO][4422] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211 Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.305 [INFO][4422] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" host="localhost" Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.314 [INFO][4422] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" host="localhost" Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.314 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" host="localhost" Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.314 [INFO][4422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:48:10.364484 containerd[1600]: 2026-01-15 05:48:10.314 [INFO][4422] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" HandleID="k8s-pod-network.4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Workload="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.364698 containerd[1600]: 2026-01-15 05:48:10.318 [INFO][4409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20e98906-80c1-467c-b4a1-25e2d8442e54", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-s9s7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c22b2513c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:10.364831 containerd[1600]: 2026-01-15 05:48:10.318 [INFO][4409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.364831 containerd[1600]: 2026-01-15 05:48:10.318 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c22b2513c6 ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.364831 containerd[1600]: 2026-01-15 05:48:10.325 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.364938 
containerd[1600]: 2026-01-15 05:48:10.335 [INFO][4409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"20e98906-80c1-467c-b4a1-25e2d8442e54", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211", Pod:"coredns-674b8bbfcf-s9s7c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c22b2513c6", MAC:"5a:02:2c:73:98:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:10.364938 containerd[1600]: 2026-01-15 05:48:10.355 [INFO][4409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" Namespace="kube-system" Pod="coredns-674b8bbfcf-s9s7c" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--s9s7c-eth0" Jan 15 05:48:10.399000 audit[4440]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:10.399000 audit[4440]: SYSCALL arch=c000003e syscall=46 success=yes exit=24928 a0=3 a1=7ffd38b2d380 a2=0 a3=7ffd38b2d36c items=0 ppid=4062 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.399000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:10.417000 audit[4442]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:10.417000 audit[4442]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe0e4a640 a2=0 
a3=7fffe0e4a62c items=0 ppid=2920 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:10.421631 systemd-networkd[1507]: caliea6aa0571b5: Gained IPv6LL Jan 15 05:48:10.426000 audit[4442]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4442 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:10.426000 audit[4442]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffe0e4a640 a2=0 a3=0 items=0 ppid=2920 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:10.442247 containerd[1600]: time="2026-01-15T05:48:10.442147202Z" level=info msg="connecting to shim 4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211" address="unix:///run/containerd/s/9fda8699fb642e1fcc0aeb7bb5809239d24d008f0a701aface3f765cdbb722d6" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:10.484612 systemd[1]: Started cri-containerd-4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211.scope - libcontainer container 4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211. Jan 15 05:48:10.504000 audit: BPF prog-id=226 op=LOAD Jan 15 05:48:10.505000 audit: BPF prog-id=227 op=LOAD Jan 15 05:48:10.505000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.505000 audit: BPF prog-id=227 op=UNLOAD Jan 15 05:48:10.505000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.505000 audit: BPF prog-id=228 op=LOAD Jan 15 05:48:10.505000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.505000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.505000 audit: BPF prog-id=229 op=LOAD Jan 15 05:48:10.505000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.506000 audit: BPF prog-id=229 op=UNLOAD Jan 15 05:48:10.506000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.506000 audit: BPF prog-id=228 op=UNLOAD Jan 15 05:48:10.506000 audit[4463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.506000 audit: BPF prog-id=230 op=LOAD Jan 15 05:48:10.506000 audit[4463]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4452 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466373130383963663235333263623062613630653762383930363363 Jan 15 05:48:10.507725 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:10.555451 containerd[1600]: time="2026-01-15T05:48:10.555292593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s9s7c,Uid:20e98906-80c1-467c-b4a1-25e2d8442e54,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211\"" Jan 15 05:48:10.557658 kubelet[2767]: E0115 05:48:10.556948 2767 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:10.576276 containerd[1600]: time="2026-01-15T05:48:10.576214048Z" level=info msg="CreateContainer within sandbox \"4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:48:10.591459 containerd[1600]: time="2026-01-15T05:48:10.591296455Z" level=info msg="Container 3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:48:10.596507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4063725350.mount: Deactivated successfully. Jan 15 05:48:10.605746 containerd[1600]: time="2026-01-15T05:48:10.605669855Z" level=info msg="CreateContainer within sandbox \"4f71089cf2532cb0ba60e7b89063c636ab0507b73c72b35f6630cc88bc56b211\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a\"" Jan 15 05:48:10.606727 containerd[1600]: time="2026-01-15T05:48:10.606614466Z" level=info msg="StartContainer for \"3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a\"" Jan 15 05:48:10.608470 containerd[1600]: time="2026-01-15T05:48:10.608272027Z" level=info msg="connecting to shim 3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a" address="unix:///run/containerd/s/9fda8699fb642e1fcc0aeb7bb5809239d24d008f0a701aface3f765cdbb722d6" protocol=ttrpc version=3 Jan 15 05:48:10.649741 systemd[1]: Started cri-containerd-3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a.scope - libcontainer container 3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a. 
Jan 15 05:48:10.675000 audit: BPF prog-id=231 op=LOAD Jan 15 05:48:10.676000 audit: BPF prog-id=232 op=LOAD Jan 15 05:48:10.676000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.677000 audit: BPF prog-id=232 op=UNLOAD Jan 15 05:48:10.677000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.677000 audit: BPF prog-id=233 op=LOAD Jan 15 05:48:10.677000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.677000 audit: BPF prog-id=234 op=LOAD Jan 15 05:48:10.677000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.677000 audit: BPF prog-id=234 op=UNLOAD Jan 15 05:48:10.677000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.678000 audit: BPF prog-id=233 op=UNLOAD Jan 15 05:48:10.678000 audit[4488]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.678000 audit: BPF prog-id=235 op=LOAD Jan 15 05:48:10.678000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4452 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:10.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364616631353338393063323362316638306361356466613936623962 Jan 15 05:48:10.711445 containerd[1600]: time="2026-01-15T05:48:10.711331612Z" level=info msg="StartContainer for \"3daf153890c23b1f80ca5dfa96b9b61a5946deb03cb149797b335e6cea15cf8a\" returns successfully" Jan 15 05:48:10.933711 systemd-networkd[1507]: cali4e1d0f6d7b9: Gained IPv6LL Jan 15 05:48:11.146584 kubelet[2767]: E0115 05:48:11.146471 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:11.147131 containerd[1600]: time="2026-01-15T05:48:11.146866905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pxxt5,Uid:0e9df5a5-f471-4333-a2bf-d06852202d61,Namespace:kube-system,Attempt:0,}" Jan 15 05:48:11.291104 systemd-networkd[1507]: cali2c9ba0ad571: Link UP Jan 15 05:48:11.291405 systemd-networkd[1507]: cali2c9ba0ad571: Gained carrier Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.204 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0 coredns-674b8bbfcf- kube-system 0e9df5a5-f471-4333-a2bf-d06852202d61 883 0 2026-01-15 05:47:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pxxt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c9ba0ad571 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.205 [INFO][4524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.239 [INFO][4540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" HandleID="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Workload="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.239 [INFO][4540] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" HandleID="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Workload="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000520990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pxxt5", "timestamp":"2026-01-15 05:48:11.23943679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.239 [INFO][4540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.239 [INFO][4540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.239 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.248 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.255 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.260 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.263 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.266 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.266 [INFO][4540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.268 [INFO][4540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1 Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.273 [INFO][4540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.283 [INFO][4540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.283 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" host="localhost" Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.283 [INFO][4540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 05:48:11.312718 containerd[1600]: 2026-01-15 05:48:11.283 [INFO][4540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" HandleID="k8s-pod-network.990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Workload="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.286 [INFO][4524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e9df5a5-f471-4333-a2bf-d06852202d61", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pxxt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9ba0ad571", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.286 [INFO][4524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.286 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c9ba0ad571 ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.290 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.290 [INFO][4524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0e9df5a5-f471-4333-a2bf-d06852202d61", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1", Pod:"coredns-674b8bbfcf-pxxt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c9ba0ad571", MAC:"ae:dc:f6:7f:c5:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:11.314083 containerd[1600]: 2026-01-15 05:48:11.309 [INFO][4524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" Namespace="kube-system" Pod="coredns-674b8bbfcf-pxxt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pxxt5-eth0" Jan 15 05:48:11.342319 kubelet[2767]: E0115 05:48:11.341892 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:11.341000 audit[4558]: NETFILTER_CFG table=filter:130 family=2 entries=44 op=nft_register_chain pid=4558 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:11.341000 audit[4558]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7fff24c95bc0 a2=0 a3=7fff24c95bac items=0 ppid=4062 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.341000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:11.343628 kubelet[2767]: E0115 05:48:11.343574 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:11.343927 kubelet[2767]: E0115 05:48:11.343822 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:11.345568 containerd[1600]: time="2026-01-15T05:48:11.345403910Z" level=info msg="connecting to shim 990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1" address="unix:///run/containerd/s/dad210375cf59e65e8fc5128dc09683ea0310095bf35c7eac32980aeefe06234" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:11.405792 systemd[1]: Started cri-containerd-990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1.scope - libcontainer container 990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1. 
Jan 15 05:48:11.423005 kubelet[2767]: I0115 05:48:11.422803 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s9s7c" podStartSLOduration=35.422788786 podStartE2EDuration="35.422788786s" podCreationTimestamp="2026-01-15 05:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:48:11.396586416 +0000 UTC m=+39.458683831" watchObservedRunningTime="2026-01-15 05:48:11.422788786 +0000 UTC m=+39.484886181" Jan 15 05:48:11.441000 audit[4596]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:11.441000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffafb3a280 a2=0 a3=7fffafb3a26c items=0 ppid=2920 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.441000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:11.449000 audit: BPF prog-id=236 op=LOAD Jan 15 05:48:11.448000 audit[4596]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:11.448000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffafb3a280 a2=0 a3=7fffafb3a26c items=0 ppid=2920 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: BPF prog-id=237 op=LOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=237 op=UNLOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=238 op=LOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=239 op=LOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=239 op=UNLOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=238 op=UNLOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.450000 audit: BPF prog-id=240 op=LOAD Jan 15 05:48:11.450000 audit[4576]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4565 pid=4576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306433646139323935303762623964623065393763343666663462 Jan 15 05:48:11.452509 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:11.448000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:11.515790 containerd[1600]: time="2026-01-15T05:48:11.515654176Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-pxxt5,Uid:0e9df5a5-f471-4333-a2bf-d06852202d61,Namespace:kube-system,Attempt:0,} returns sandbox id \"990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1\"" Jan 15 05:48:11.517904 kubelet[2767]: E0115 05:48:11.517755 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:11.525333 containerd[1600]: time="2026-01-15T05:48:11.525258508Z" level=info msg="CreateContainer within sandbox \"990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 05:48:11.541611 containerd[1600]: time="2026-01-15T05:48:11.541328371Z" level=info msg="Container 88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076: CDI devices from CRI Config.CDIDevices: []" Jan 15 05:48:11.550372 containerd[1600]: time="2026-01-15T05:48:11.550286413Z" level=info msg="CreateContainer within sandbox \"990d3da929507bb9db0e97c46ff4bfbecc25d14564f0e0174df835bbc71c1dc1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076\"" Jan 15 05:48:11.551311 containerd[1600]: time="2026-01-15T05:48:11.551252295Z" level=info msg="StartContainer for \"88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076\"" Jan 15 05:48:11.553329 containerd[1600]: time="2026-01-15T05:48:11.553174368Z" level=info msg="connecting to shim 88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076" address="unix:///run/containerd/s/dad210375cf59e65e8fc5128dc09683ea0310095bf35c7eac32980aeefe06234" protocol=ttrpc version=3 Jan 15 05:48:11.585583 systemd[1]: Started cri-containerd-88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076.scope - libcontainer container 88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076. 
Jan 15 05:48:11.614000 audit: BPF prog-id=241 op=LOAD Jan 15 05:48:11.615000 audit: BPF prog-id=242 op=LOAD Jan 15 05:48:11.615000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=242 op=UNLOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=243 op=LOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=244 op=LOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=244 op=UNLOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=243 op=UNLOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.616000 audit: BPF prog-id=245 op=LOAD Jan 15 05:48:11.616000 audit[4603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4565 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:11.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838666361373465376163396266613334656635346438353437383732 Jan 15 05:48:11.668994 containerd[1600]: time="2026-01-15T05:48:11.668898770Z" level=info msg="StartContainer for \"88fca74e7ac9bfa34ef54d85478724c1c88bf39c7c0e129d05b6f8b5590c3076\" returns successfully" Jan 15 05:48:11.957593 systemd-networkd[1507]: cali8c22b2513c6: Gained IPv6LL Jan 15 05:48:12.151131 containerd[1600]: time="2026-01-15T05:48:12.150755549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dt9mp,Uid:cdab6cdb-eee3-4132-9980-23cedc6f5612,Namespace:calico-system,Attempt:0,}" Jan 15 05:48:12.154648 containerd[1600]: time="2026-01-15T05:48:12.152169336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s4fx8,Uid:1a790238-cc0e-45da-b99f-c6adf406e452,Namespace:calico-system,Attempt:0,}" Jan 15 05:48:12.154648 containerd[1600]: time="2026-01-15T05:48:12.154421225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-7xzb7,Uid:6b6565ef-726d-4164-a834-22cf5e5bfe9a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 05:48:12.347619 kubelet[2767]: E0115 05:48:12.347570 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:12.349573 kubelet[2767]: E0115 05:48:12.348556 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:12.368794 kubelet[2767]: I0115 05:48:12.368620 2767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pxxt5" podStartSLOduration=36.36860383 podStartE2EDuration="36.36860383s" podCreationTimestamp="2026-01-15 05:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 05:48:12.367624853 +0000 UTC m=+40.429722249" watchObservedRunningTime="2026-01-15 05:48:12.36860383 +0000 UTC m=+40.430701225" Jan 15 05:48:12.392726 systemd-networkd[1507]: caliab6ae3f8041: Link UP Jan 15 05:48:12.393850 systemd-networkd[1507]: caliab6ae3f8041: Gained carrier Jan 15 05:48:12.411000 audit[4712]: NETFILTER_CFG table=filter:133 family=2 entries=14 
op=nft_register_rule pid=4712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:12.411000 audit[4712]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde7bd6160 a2=0 a3=7ffde7bd614c items=0 ppid=2920 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:12.425000 audit[4712]: NETFILTER_CFG table=nat:134 family=2 entries=44 op=nft_register_rule pid=4712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:12.425000 audit[4712]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde7bd6160 a2=0 a3=7ffde7bd614c items=0 ppid=2920 pid=4712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.247 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dt9mp-eth0 csi-node-driver- calico-system cdab6cdb-eee3-4132-9980-23cedc6f5612 773 0 2026-01-15 05:47:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dt9mp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliab6ae3f8041 [] [] }} ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.247 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.310 [INFO][4686] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" HandleID="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Workload="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.310 [INFO][4686] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" HandleID="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Workload="localhost-k8s-csi--node--driver--dt9mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e40c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dt9mp", "timestamp":"2026-01-15 05:48:12.310314153 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.310 [INFO][4686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.311 [INFO][4686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.311 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.320 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.333 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.339 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.343 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.346 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.346 [INFO][4686] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.350 [INFO][4686] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012 Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.359 [INFO][4686] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.373 [INFO][4686] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.373 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" host="localhost" Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.373 [INFO][4686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 05:48:12.429092 containerd[1600]: 2026-01-15 05:48:12.373 [INFO][4686] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" HandleID="k8s-pod-network.e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Workload="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.381 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dt9mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdab6cdb-eee3-4132-9980-23cedc6f5612", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dt9mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab6ae3f8041", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.381 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.382 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab6ae3f8041 ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.394 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.396 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dt9mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdab6cdb-eee3-4132-9980-23cedc6f5612", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012", Pod:"csi-node-driver-dt9mp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab6ae3f8041", MAC:"46:90:a1:e4:2f:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.430907 containerd[1600]: 2026-01-15 05:48:12.420 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" Namespace="calico-system" Pod="csi-node-driver-dt9mp" WorkloadEndpoint="localhost-k8s-csi--node--driver--dt9mp-eth0" Jan 15 05:48:12.457000 audit[4721]: NETFILTER_CFG table=filter:135 family=2 entries=52 op=nft_register_chain pid=4721 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:12.457000 audit[4721]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7fffa3cf5a70 a2=0 a3=7fffa3cf5a5c items=0 ppid=4062 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.457000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:12.488168 containerd[1600]: time="2026-01-15T05:48:12.488076582Z" level=info msg="connecting to shim e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012" address="unix:///run/containerd/s/87416716198c15f9871f9cc0eaeeef400a60e7997092aab5ca64c7eaac16acc2" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:12.502000 audit[4737]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:12.502000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4336ac60 a2=0 a3=7ffc4336ac4c items=0 ppid=2920 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 15 05:48:12.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:12.536000 audit[4737]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:12.536000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc4336ac60 a2=0 a3=7ffc4336ac4c items=0 ppid=2920 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:12.552941 systemd-networkd[1507]: califc3cab5fc09: Link UP Jan 15 05:48:12.555405 systemd-networkd[1507]: califc3cab5fc09: Gained carrier Jan 15 05:48:12.555827 systemd[1]: Started cri-containerd-e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012.scope - libcontainer container e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012. Jan 15 05:48:12.578000 audit: BPF prog-id=246 op=LOAD Jan 15 05:48:12.581000 audit: BPF prog-id=247 op=LOAD Jan 15 05:48:12.581000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.583000 audit: BPF prog-id=247 op=UNLOAD Jan 15 05:48:12.583000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.583000 audit: BPF prog-id=248 op=LOAD Jan 15 05:48:12.583000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.584000 audit: BPF prog-id=249 op=LOAD Jan 15 05:48:12.584000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.584000 audit: BPF prog-id=249 op=UNLOAD Jan 15 05:48:12.584000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.585000 audit: BPF prog-id=248 op=UNLOAD Jan 15 05:48:12.585000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.585000 audit: BPF prog-id=250 op=LOAD Jan 15 05:48:12.585000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532336130323938656638393062333631373135623564656464643630 Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.256 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0 calico-apiserver-6988569c94- calico-apiserver 6b6565ef-726d-4164-a834-22cf5e5bfe9a 886 0 2026-01-15 05:47:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6988569c94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6988569c94-7xzb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc3cab5fc09 [] [] }} ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.257 [INFO][4642] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.327 [INFO][4694] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" HandleID="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Workload="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.328 [INFO][4694] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" HandleID="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Workload="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6988569c94-7xzb7", "timestamp":"2026-01-15 05:48:12.327723863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.328 [INFO][4694] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.373 [INFO][4694] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.374 [INFO][4694] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.423 [INFO][4694] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.444 [INFO][4694] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.454 [INFO][4694] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.462 [INFO][4694] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.473 [INFO][4694] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.474 [INFO][4694] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.480 [INFO][4694] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65 Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.486 [INFO][4694] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" host="localhost" 
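The audit PROCTITLE fields in the records above carry the invoked command line as hex-encoded, NUL-separated argv. A minimal decoding sketch in plain Python (the hex value is copied verbatim from the iptables-restore record above; any other PROCTITLE value can be substituted):

    # Decode an audit PROCTITLE value: hex -> bytes, split on NUL bytes, rejoin as argv.
    proctitle_hex = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables-restore -w 5 -W 100000 --noflush --counters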
Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.508 [INFO][4694] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.508 [INFO][4694] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" host="localhost" Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.508 [INFO][4694] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 05:48:12.588410 containerd[1600]: 2026-01-15 05:48:12.508 [INFO][4694] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" HandleID="k8s-pod-network.2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Workload="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.533 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0", GenerateName:"calico-apiserver-6988569c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b6565ef-726d-4164-a834-22cf5e5bfe9a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6988569c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6988569c94-7xzb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc3cab5fc09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.533 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.533 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc3cab5fc09 ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" 
Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.556 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.559 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0", GenerateName:"calico-apiserver-6988569c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"6b6565ef-726d-4164-a834-22cf5e5bfe9a", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6988569c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65", Pod:"calico-apiserver-6988569c94-7xzb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc3cab5fc09", MAC:"3a:a0:72:a3:33:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.589053 containerd[1600]: 2026-01-15 05:48:12.579 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" Namespace="calico-apiserver" Pod="calico-apiserver-6988569c94-7xzb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--6988569c94--7xzb7-eth0" Jan 15 05:48:12.589632 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:12.614000 audit[4778]: NETFILTER_CFG table=filter:138 family=2 entries=57 op=nft_register_chain pid=4778 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:12.614000 audit[4778]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7ffc7e8b0760 a2=0 a3=7ffc7e8b074c items=0 ppid=4062 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.614000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:12.631019 containerd[1600]: time="2026-01-15T05:48:12.630417622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dt9mp,Uid:cdab6cdb-eee3-4132-9980-23cedc6f5612,Namespace:calico-system,Attempt:0,} returns sandbox id \"e23a0298ef890b361715b5deddd609506778311b1badc1c0c5cd638b1beee012\"" Jan 15 05:48:12.634837 containerd[1600]: time="2026-01-15T05:48:12.634780699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:48:12.636487 systemd-networkd[1507]: cali4e33d8074e0: Link UP Jan 15 05:48:12.638629 systemd-networkd[1507]: cali4e33d8074e0: Gained carrier Jan 15 05:48:12.648630 containerd[1600]: time="2026-01-15T05:48:12.648563158Z" level=info msg="connecting to shim 2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65" address="unix:///run/containerd/s/d2fed2c99d45a815c07f447499ae38c4a59833503650e72ae11d3e5d0a48c255" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.260 [INFO][4639] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--s4fx8-eth0 goldmane-666569f655- calico-system 1a790238-cc0e-45da-b99f-c6adf406e452 885 0 2026-01-15 05:47:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-s4fx8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4e33d8074e0 [] [] }} ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.260 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.329 [INFO][4692] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" HandleID="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Workload="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.330 [INFO][4692] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" HandleID="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Workload="localhost-k8s-goldmane--666569f655--s4fx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001136b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-s4fx8", "timestamp":"2026-01-15 05:48:12.329440013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 05:48:12.669427 
containerd[1600]: 2026-01-15 05:48:12.330 [INFO][4692] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.509 [INFO][4692] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.509 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.538 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.562 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.581 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.585 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.591 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.591 [INFO][4692] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.595 [INFO][4692] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274 Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.603 [INFO][4692] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.621 [INFO][4692] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.621 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" host="localhost" Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.621 [INFO][4692] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
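The IPAM records above claim 192.168.88.135 and 192.168.88.136 out of the affine block 192.168.88.128/26 for host "localhost". A quick stand-alone check, not part of the logged tooling, that those addresses really fall inside that block:

    # 192.168.88.128/26 covers 192.168.88.128 through 192.168.88.191,
    # so both assigned pod IPs should test as members of the block.
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    for addr in ("192.168.88.135", "192.168.88.136"):
        print(addr, ipaddress.ip_address(addr) in block)   # expect: True, True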
Jan 15 05:48:12.669427 containerd[1600]: 2026-01-15 05:48:12.621 [INFO][4692] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" HandleID="k8s-pod-network.400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Workload="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.627 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s4fx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1a790238-cc0e-45da-b99f-c6adf406e452", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-s4fx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e33d8074e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.627 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.627 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e33d8074e0 ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.641 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.644 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--s4fx8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1a790238-cc0e-45da-b99f-c6adf406e452", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 5, 47, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274", Pod:"goldmane-666569f655-s4fx8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4e33d8074e0", MAC:"4e:cc:10:0c:9a:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 05:48:12.671545 containerd[1600]: 2026-01-15 05:48:12.661 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" Namespace="calico-system" Pod="goldmane-666569f655-s4fx8" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--s4fx8-eth0" Jan 15 05:48:12.702000 audit[4816]: NETFILTER_CFG table=filter:139 family=2 entries=74 op=nft_register_chain pid=4816 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 15 05:48:12.702000 audit[4816]: SYSCALL arch=c000003e syscall=46 success=yes exit=35160 a0=3 a1=7ffc1946fef0 a2=0 a3=7ffc1946fedc items=0 ppid=4062 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.702000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 15 05:48:12.712443 containerd[1600]: time="2026-01-15T05:48:12.712394223Z" level=info msg="connecting to shim 400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274" address="unix:///run/containerd/s/8b84bbb0fc5b0b62e095e9a0806a02e92b4dfe9fe075abbf9df460264a17eb01" namespace=k8s.io protocol=ttrpc version=3 Jan 15 05:48:12.716888 systemd[1]: Started cri-containerd-2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65.scope - libcontainer container 2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65. 
Jan 15 05:48:12.740973 containerd[1600]: time="2026-01-15T05:48:12.740831460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:12.742339 containerd[1600]: time="2026-01-15T05:48:12.742225971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:48:12.742339 containerd[1600]: time="2026-01-15T05:48:12.742310233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:12.743397 kubelet[2767]: E0115 05:48:12.743060 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:12.743397 kubelet[2767]: E0115 05:48:12.743298 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:12.743597 kubelet[2767]: E0115 05:48:12.743500 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:12.752122 containerd[1600]: time="2026-01-15T05:48:12.752013784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:48:12.751000 audit: BPF prog-id=251 op=LOAD Jan 15 05:48:12.753000 audit: BPF prog-id=252 op=LOAD Jan 15 05:48:12.753000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.753000 audit: BPF prog-id=252 op=UNLOAD Jan 15 05:48:12.753000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.753000 audit: BPF prog-id=253 op=LOAD Jan 15 05:48:12.753000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.753000 audit: BPF prog-id=254 op=LOAD Jan 15 05:48:12.753000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.754000 audit: BPF prog-id=254 op=UNLOAD Jan 15 05:48:12.754000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.754000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.754000 audit: BPF prog-id=253 op=UNLOAD Jan 15 05:48:12.754000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.754000 audit: BPF prog-id=255 op=LOAD Jan 15 05:48:12.754000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4792 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266376537643536616434346439306234383162616366333532393435 Jan 15 05:48:12.756435 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:12.783275 systemd[1]: Started cri-containerd-400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274.scope - libcontainer container 400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274. 
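The calico image pulls in this log keep failing with 404 Not Found from ghcr.io before the kubelet reports ErrImagePull. A hedged sketch for checking a tag directly against the registry, assuming ghcr.io follows the standard OCI distribution API with anonymous bearer tokens for public repositories; the repository and tag come from the log, while the helper name and flow are illustrative only:

    # Hypothetical helper: distinguish "tag does not exist" from a node-side problem
    # by asking the registry for the manifest. Assumes anonymous pull tokens work
    # for public ghcr.io repositories; private repositories would need credentials.
    import json
    import urllib.error
    import urllib.request

    def ghcr_tag_exists(repository: str, tag: str) -> bool:
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repository}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # tag (or repository) not found
                return False
            raise

    print(ghcr_tag_exists("flatcar/calico/csi", "v3.30.4"))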
Jan 15 05:48:12.830837 containerd[1600]: time="2026-01-15T05:48:12.830538372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6988569c94-7xzb7,Uid:6b6565ef-726d-4164-a834-22cf5e5bfe9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f7e7d56ad44d90b481bacf3529456bc7fe303f86899a03f55c084b6a3511c65\"" Jan 15 05:48:12.847141 containerd[1600]: time="2026-01-15T05:48:12.846858926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:12.848868 containerd[1600]: time="2026-01-15T05:48:12.848791323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:48:12.849113 containerd[1600]: time="2026-01-15T05:48:12.848984102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:12.849587 kubelet[2767]: E0115 05:48:12.849551 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:12.849682 kubelet[2767]: E0115 05:48:12.849667 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:12.850126 kubelet[2767]: E0115 05:48:12.849964 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:12.850629 containerd[1600]: time="2026-01-15T05:48:12.850610325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:12.851626 kubelet[2767]: E0115 05:48:12.851503 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:12.870000 audit: BPF prog-id=256 op=LOAD Jan 15 05:48:12.871000 audit: BPF prog-id=257 op=LOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=257 op=UNLOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=258 op=LOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=259 op=LOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=259 op=UNLOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=258 op=UNLOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.871000 audit: BPF prog-id=260 op=LOAD Jan 15 05:48:12.871000 audit[4849]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4829 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:12.871000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430306330393130336266663039323034316537373030613536373037 Jan 15 05:48:12.873120 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 15 05:48:12.908872 containerd[1600]: time="2026-01-15T05:48:12.908834849Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:12.910832 containerd[1600]: time="2026-01-15T05:48:12.910638052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:48:12.911152 containerd[1600]: time="2026-01-15T05:48:12.910792450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:12.912397 kubelet[2767]: E0115 05:48:12.911459 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:12.912397 kubelet[2767]: E0115 05:48:12.911496 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:12.912397 kubelet[2767]: E0115 05:48:12.911601 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:12.912833 kubelet[2767]: E0115 05:48:12.912759 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:12.928444 containerd[1600]: time="2026-01-15T05:48:12.928152615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-s4fx8,Uid:1a790238-cc0e-45da-b99f-c6adf406e452,Namespace:calico-system,Attempt:0,} returns sandbox id \"400c09103bff092041e7700a567072b95691a6d2732513450b4d454cc96cd274\"" Jan 15 05:48:12.933066 containerd[1600]: time="2026-01-15T05:48:12.932798506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:48:12.998755 containerd[1600]: time="2026-01-15T05:48:12.998668932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:13.000379 containerd[1600]: time="2026-01-15T05:48:13.000283652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:48:13.000452 containerd[1600]: time="2026-01-15T05:48:13.000407512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:13.000686 kubelet[2767]: E0115 05:48:13.000631 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:13.000763 kubelet[2767]: E0115 05:48:13.000688 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:13.000911 kubelet[2767]: E0115 05:48:13.000849 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gpmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:13.002421 kubelet[2767]: E0115 05:48:13.002216 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:13.238219 systemd-networkd[1507]: cali2c9ba0ad571: Gained IPv6LL Jan 15 05:48:13.353840 kubelet[2767]: E0115 05:48:13.353724 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:13.355306 kubelet[2767]: E0115 05:48:13.355217 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:13.356763 kubelet[2767]: E0115 05:48:13.356593 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:13.356763 kubelet[2767]: E0115 05:48:13.356674 2767 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:13.358650 kubelet[2767]: E0115 05:48:13.358584 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:13.446000 audit[4886]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:13.446000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6c8d9b00 a2=0 a3=7ffc6c8d9aec items=0 ppid=2920 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:13.446000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:13.452000 audit[4886]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:48:13.452000 audit[4886]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc6c8d9b00 a2=0 a3=7ffc6c8d9aec items=0 ppid=2920 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:13.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:48:13.685622 systemd-networkd[1507]: cali4e33d8074e0: Gained IPv6LL Jan 15 05:48:13.749642 systemd-networkd[1507]: califc3cab5fc09: Gained IPv6LL Jan 15 05:48:14.262721 systemd-networkd[1507]: caliab6ae3f8041: Gained IPv6LL Jan 15 05:48:14.360289 kubelet[2767]: E0115 05:48:14.360130 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:14.361517 kubelet[2767]: E0115 05:48:14.360558 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:14.361671 kubelet[2767]: E0115 05:48:14.361627 2767 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:16.382404 kubelet[2767]: I0115 05:48:16.382170 2767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 15 05:48:16.384291 kubelet[2767]: E0115 05:48:16.383722 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:17.372299 kubelet[2767]: E0115 05:48:17.372207 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:22.158532 containerd[1600]: time="2026-01-15T05:48:22.158478720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:48:22.220791 containerd[1600]: time="2026-01-15T05:48:22.220684308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:22.222604 containerd[1600]: time="2026-01-15T05:48:22.222462972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:48:22.222604 containerd[1600]: time="2026-01-15T05:48:22.222585981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:22.222989 kubelet[2767]: E0115 05:48:22.222813 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:22.223617 kubelet[2767]: E0115 05:48:22.223071 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:22.223617 kubelet[2767]: E0115 05:48:22.223324 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:73cf578236394d7092cc66aa5ac2392b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:22.228256 containerd[1600]: time="2026-01-15T05:48:22.227506983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:48:22.285239 containerd[1600]: time="2026-01-15T05:48:22.285167679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:22.286893 containerd[1600]: time="2026-01-15T05:48:22.286783076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:48:22.286953 containerd[1600]: time="2026-01-15T05:48:22.286893262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:22.287204 kubelet[2767]: E0115 05:48:22.287162 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:22.287320 kubelet[2767]: E0115 05:48:22.287214 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:22.287489 kubelet[2767]: E0115 05:48:22.287439 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:22.288789 kubelet[2767]: E0115 05:48:22.288662 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:48:24.148761 containerd[1600]: time="2026-01-15T05:48:24.148674078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:24.207122 containerd[1600]: time="2026-01-15T05:48:24.206855167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:24.209373 containerd[1600]: time="2026-01-15T05:48:24.209311345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
15 05:48:24.209569 containerd[1600]: time="2026-01-15T05:48:24.209474209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:24.209819 kubelet[2767]: E0115 05:48:24.209756 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:24.210621 kubelet[2767]: E0115 05:48:24.209823 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:24.210621 kubelet[2767]: E0115 05:48:24.209997 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6bnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:24.211647 kubelet[2767]: E0115 05:48:24.211449 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:25.150475 containerd[1600]: time="2026-01-15T05:48:25.149819652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:25.209832 containerd[1600]: time="2026-01-15T05:48:25.209759984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:25.211526 containerd[1600]: time="2026-01-15T05:48:25.211395533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:48:25.211526 containerd[1600]: time="2026-01-15T05:48:25.211433988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:25.211711 kubelet[2767]: E0115 05:48:25.211648 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:25.212225 kubelet[2767]: E0115 05:48:25.211723 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:25.212225 kubelet[2767]: E0115 05:48:25.212006 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:25.212902 containerd[1600]: time="2026-01-15T05:48:25.212859798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:48:25.213417 kubelet[2767]: E0115 05:48:25.213296 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:25.272745 containerd[1600]: time="2026-01-15T05:48:25.272498049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:25.274295 containerd[1600]: time="2026-01-15T05:48:25.274247604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:48:25.276395 containerd[1600]: time="2026-01-15T05:48:25.274334249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:25.276513 kubelet[2767]: E0115 05:48:25.275045 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:25.276513 kubelet[2767]: E0115 05:48:25.275258 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:25.276608 kubelet[2767]: E0115 05:48:25.276503 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:25.282992 containerd[1600]: time="2026-01-15T05:48:25.282859405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:48:25.347788 containerd[1600]: time="2026-01-15T05:48:25.347553624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:25.350828 containerd[1600]: time="2026-01-15T05:48:25.350751010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:48:25.350940 containerd[1600]: time="2026-01-15T05:48:25.350788061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:25.351174 kubelet[2767]: E0115 05:48:25.351066 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:25.351246 kubelet[2767]: E0115 05:48:25.351184 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:25.352564 kubelet[2767]: E0115 05:48:25.351436 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:25.353188 kubelet[2767]: E0115 05:48:25.353072 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:26.152800 containerd[1600]: time="2026-01-15T05:48:26.152703317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:48:26.212203 containerd[1600]: time="2026-01-15T05:48:26.212082830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
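The 404 responses from ghcr.io above, and the matching NotFound errors from containerd and kubelet, all point at the same cause: the registry has no manifest for the ghcr.io/flatcar/calico/*:v3.30.4 tags, so every pull ends in ErrImagePull and the affected pods fall into ImagePullBackOff. Below is a minimal sketch for checking such a tag directly against the registry's OCI distribution API, assuming ghcr.io issues anonymous pull tokens for public repositories via its /token endpoint (the helper name and flow are illustrative, not taken from the log):

    import json
    import urllib.error
    import urllib.request

    def ghcr_tag_exists(repo: str, tag: str) -> bool:
        """Return True if ghcr.io serves a manifest for repo:tag, False on 404."""
        # Assumption: public repositories on ghcr.io hand out anonymous pull tokens.
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # Request the manifest; a 404 here corresponds to the "not found" errors in the log.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        # Example: the apiserver tag that keeps failing in the entries above.
        print(ghcr_tag_exists("flatcar/calico/apiserver", "v3.30.4"))

The ImagePullBackOff entries that follow are kubelet retrying the same pulls on a growing back-off, which is why the identical NotFound errors recur at widening intervals in the log.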
Jan 15 05:48:26.213634 containerd[1600]: time="2026-01-15T05:48:26.213493402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:48:26.213634 containerd[1600]: time="2026-01-15T05:48:26.213603046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:26.213870 kubelet[2767]: E0115 05:48:26.213826 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:26.214460 kubelet[2767]: E0115 05:48:26.213881 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:26.214460 kubelet[2767]: E0115 05:48:26.214046 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:26.216006 kubelet[2767]: E0115 05:48:26.215914 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:29.148482 containerd[1600]: time="2026-01-15T05:48:29.148288850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:48:29.211522 containerd[1600]: time="2026-01-15T05:48:29.211418826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:29.213068 containerd[1600]: time="2026-01-15T05:48:29.212963087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:48:29.213145 containerd[1600]: time="2026-01-15T05:48:29.213008779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:29.213371 kubelet[2767]: E0115 05:48:29.213295 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:29.213801 kubelet[2767]: E0115 05:48:29.213416 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:29.213801 kubelet[2767]: E0115 05:48:29.213584 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gpmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:29.214982 kubelet[2767]: E0115 05:48:29.214826 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:37.149105 kubelet[2767]: E0115 05:48:37.148936 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:48:39.149106 kubelet[2767]: E0115 05:48:39.148944 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:39.150496 kubelet[2767]: E0115 05:48:39.150435 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:41.149701 kubelet[2767]: E0115 05:48:41.149495 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:41.150725 kubelet[2767]: E0115 05:48:41.150265 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:42.149631 
kubelet[2767]: E0115 05:48:42.149503 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:47.148411 kubelet[2767]: E0115 05:48:47.146631 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:50.148936 containerd[1600]: time="2026-01-15T05:48:50.148847409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:48:50.227975 containerd[1600]: time="2026-01-15T05:48:50.227736413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:50.230173 containerd[1600]: time="2026-01-15T05:48:50.230126457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:50.230295 containerd[1600]: time="2026-01-15T05:48:50.230224536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:48:50.231481 kubelet[2767]: E0115 05:48:50.230924 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:50.231481 kubelet[2767]: E0115 05:48:50.230983 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:48:50.231481 kubelet[2767]: E0115 05:48:50.231199 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:73cf578236394d7092cc66aa5ac2392b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:50.233803 containerd[1600]: time="2026-01-15T05:48:50.233758904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:48:50.314313 containerd[1600]: time="2026-01-15T05:48:50.314257723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:50.322793 containerd[1600]: time="2026-01-15T05:48:50.321642467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:48:50.322793 containerd[1600]: time="2026-01-15T05:48:50.321751590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:50.322989 kubelet[2767]: E0115 05:48:50.322438 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:50.322989 kubelet[2767]: E0115 05:48:50.322586 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:48:50.324435 kubelet[2767]: E0115 05:48:50.323682 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:50.326586 kubelet[2767]: E0115 05:48:50.326415 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:48:51.146786 kubelet[2767]: E0115 05:48:51.146697 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:51.158947 containerd[1600]: time="2026-01-15T05:48:51.158868833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:51.250002 containerd[1600]: time="2026-01-15T05:48:51.249590751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:51.251509 containerd[1600]: time="2026-01-15T05:48:51.251409186Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:48:51.251736 containerd[1600]: time="2026-01-15T05:48:51.251625196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:51.252738 kubelet[2767]: E0115 05:48:51.252475 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:51.252738 kubelet[2767]: E0115 05:48:51.252531 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:51.254311 kubelet[2767]: E0115 05:48:51.252709 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6bnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:51.254609 kubelet[2767]: E0115 05:48:51.254482 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:48:53.150244 containerd[1600]: time="2026-01-15T05:48:53.150049712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:48:53.212924 containerd[1600]: time="2026-01-15T05:48:53.212814240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:53.214968 containerd[1600]: time="2026-01-15T05:48:53.214636473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:48:53.214968 containerd[1600]: time="2026-01-15T05:48:53.214800407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:53.215937 kubelet[2767]: E0115 05:48:53.215392 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:53.215937 kubelet[2767]: E0115 05:48:53.215459 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:48:53.215937 kubelet[2767]: E0115 05:48:53.215654 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:53.217322 kubelet[2767]: E0115 05:48:53.217242 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:48:54.152687 containerd[1600]: time="2026-01-15T05:48:54.152565654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:48:54.232525 containerd[1600]: time="2026-01-15T05:48:54.232447739Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:54.234548 containerd[1600]: time="2026-01-15T05:48:54.234464368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:48:54.234676 containerd[1600]: time="2026-01-15T05:48:54.234493117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:54.235121 kubelet[2767]: E0115 05:48:54.234983 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:54.235121 kubelet[2767]: E0115 05:48:54.235064 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:48:54.235781 kubelet[2767]: E0115 05:48:54.235541 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:54.236604 
containerd[1600]: time="2026-01-15T05:48:54.236561549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:48:54.294994 containerd[1600]: time="2026-01-15T05:48:54.294886453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:54.296620 containerd[1600]: time="2026-01-15T05:48:54.296510390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:48:54.296711 containerd[1600]: time="2026-01-15T05:48:54.296551315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:54.297941 kubelet[2767]: E0115 05:48:54.297859 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:54.297941 kubelet[2767]: E0115 05:48:54.297928 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:48:54.298403 kubelet[2767]: E0115 05:48:54.298177 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:54.298522 containerd[1600]: time="2026-01-15T05:48:54.298403932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:48:54.299921 kubelet[2767]: E0115 05:48:54.299733 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:48:54.369523 containerd[1600]: time="2026-01-15T05:48:54.369441202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:54.371217 containerd[1600]: time="2026-01-15T05:48:54.371154011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:48:54.371329 containerd[1600]: time="2026-01-15T05:48:54.371259897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:54.371914 kubelet[2767]: E0115 05:48:54.371804 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:54.371914 kubelet[2767]: E0115 05:48:54.371876 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:48:54.372135 kubelet[2767]: E0115 05:48:54.372022 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:54.373533 kubelet[2767]: E0115 05:48:54.373326 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:48:54.514941 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176). Jan 15 05:48:54.531123 kernel: kauditd_printk_skb: 210 callbacks suppressed Jan 15 05:48:54.531337 kernel: audit: type=1130 audit(1768456134.513:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-10.0.0.1:50176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:48:54.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-10.0.0.1:50176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:48:54.678000 audit[5006]: USER_ACCT pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.684541 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:48:54.692763 kernel: audit: type=1101 audit(1768456134.678:746): pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.692812 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:48:54.681000 audit[5006]: CRED_ACQ pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.699584 systemd-logind[1577]: New session 9 of user core. Jan 15 05:48:54.713889 kernel: audit: type=1103 audit(1768456134.681:747): pid=5006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.713951 kernel: audit: type=1006 audit(1768456134.681:748): pid=5006 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 15 05:48:54.713978 kernel: audit: type=1300 audit(1768456134.681:748): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5607fb80 a2=3 a3=0 items=0 ppid=1 pid=5006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:54.681000 audit[5006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5607fb80 a2=3 a3=0 items=0 ppid=1 pid=5006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:48:54.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:48:54.732823 kernel: audit: type=1327 audit(1768456134.681:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:48:54.733989 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 15 05:48:54.739000 audit[5006]: USER_START pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.757243 kernel: audit: type=1105 audit(1768456134.739:749): pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.757462 kernel: audit: type=1103 audit(1768456134.754:750): pid=5010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.754000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.915426 sshd[5010]: Connection closed by 10.0.0.1 port 50176 Jan 15 05:48:54.915753 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Jan 15 05:48:54.916000 audit[5006]: USER_END pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.922758 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:50176.service: Deactivated successfully. Jan 15 05:48:54.923182 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Jan 15 05:48:54.926556 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 05:48:54.930816 systemd-logind[1577]: Removed session 9. Jan 15 05:48:54.916000 audit[5006]: CRED_DISP pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.944283 kernel: audit: type=1106 audit(1768456134.916:751): pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.944433 kernel: audit: type=1104 audit(1768456134.916:752): pid=5006 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:48:54.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.92:22-10.0.0.1:50176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:48:55.147769 containerd[1600]: time="2026-01-15T05:48:55.147506756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:48:55.215855 containerd[1600]: time="2026-01-15T05:48:55.215004623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:48:55.218887 containerd[1600]: time="2026-01-15T05:48:55.218768944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:48:55.218887 containerd[1600]: time="2026-01-15T05:48:55.218854402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:48:55.219389 kubelet[2767]: E0115 05:48:55.219252 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:55.219389 kubelet[2767]: E0115 05:48:55.219309 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:48:55.220588 kubelet[2767]: E0115 05:48:55.220450 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gpmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:48:55.222615 kubelet[2767]: E0115 05:48:55.221892 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:48:56.147134 kubelet[2767]: E0115 05:48:56.146914 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:48:59.938205 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:50188.service - OpenSSH per-connection server daemon (10.0.0.1:50188). Jan 15 05:48:59.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-10.0.0.1:50188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:48:59.941177 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:48:59.941255 kernel: audit: type=1130 audit(1768456139.937:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-10.0.0.1:50188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:00.022000 audit[5027]: USER_ACCT pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.024687 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 50188 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:00.028601 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:00.025000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.044065 systemd-logind[1577]: New session 10 of user core. 
Jan 15 05:49:00.045115 kernel: audit: type=1101 audit(1768456140.022:755): pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.045150 kernel: audit: type=1103 audit(1768456140.025:756): pid=5027 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.025000 audit[5027]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb3187a90 a2=3 a3=0 items=0 ppid=1 pid=5027 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:00.059082 kernel: audit: type=1006 audit(1768456140.025:757): pid=5027 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 15 05:49:00.059143 kernel: audit: type=1300 audit(1768456140.025:757): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb3187a90 a2=3 a3=0 items=0 ppid=1 pid=5027 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:00.059163 kernel: audit: type=1327 audit(1768456140.025:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:00.025000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:00.064544 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 15 05:49:00.067000 audit[5027]: USER_START pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.068000 audit[5031]: CRED_ACQ pid=5031 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.090413 kernel: audit: type=1105 audit(1768456140.067:758): pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.090533 kernel: audit: type=1103 audit(1768456140.068:759): pid=5031 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.195517 sshd[5031]: Connection closed by 10.0.0.1 port 50188 Jan 15 05:49:00.196160 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:00.197000 audit[5027]: USER_END pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.197000 audit[5027]: CRED_DISP pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.219831 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:50188.service: Deactivated successfully. Jan 15 05:49:00.222339 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 05:49:00.224967 kernel: audit: type=1106 audit(1768456140.197:760): pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.225079 kernel: audit: type=1104 audit(1768456140.197:761): pid=5027 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:00.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.92:22-10.0.0.1:50188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:00.229576 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Jan 15 05:49:00.232931 systemd-logind[1577]: Removed session 10. 
Jan 15 05:49:01.147136 kubelet[2767]: E0115 05:49:01.147085 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:49:01.152753 kubelet[2767]: E0115 05:49:01.151969 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:49:04.147566 kubelet[2767]: E0115 05:49:04.147480 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:49:05.151103 kubelet[2767]: E0115 05:49:05.150984 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:49:05.214500 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954). Jan 15 05:49:05.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:37954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:05.220420 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:05.220540 kernel: audit: type=1130 audit(1768456145.213:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:37954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:05.302000 audit[5046]: USER_ACCT pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.305092 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:05.307580 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:05.320893 kernel: audit: type=1101 audit(1768456145.302:764): pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.320971 kernel: audit: type=1103 audit(1768456145.304:765): pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.304000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.316818 systemd-logind[1577]: New session 11 of user core. Jan 15 05:49:05.304000 audit[5046]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2dedf030 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:05.334798 kernel: audit: type=1006 audit(1768456145.304:766): pid=5046 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 15 05:49:05.334903 kernel: audit: type=1300 audit(1768456145.304:766): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2dedf030 a2=3 a3=0 items=0 ppid=1 pid=5046 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:05.304000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:05.338172 kernel: audit: type=1327 audit(1768456145.304:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:05.338648 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 15 05:49:05.343000 audit[5046]: USER_START pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.355471 kernel: audit: type=1105 audit(1768456145.343:767): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.350000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.363572 kernel: audit: type=1103 audit(1768456145.350:768): pid=5050 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.502053 sshd[5050]: Connection closed by 10.0.0.1 port 37954 Jan 15 05:49:05.502627 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:05.502000 audit[5046]: USER_END pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.503000 audit[5046]: CRED_DISP pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.517894 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:37954.service: Deactivated successfully. Jan 15 05:49:05.520766 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 05:49:05.522520 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Jan 15 05:49:05.524275 kernel: audit: type=1106 audit(1768456145.502:769): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.524420 kernel: audit: type=1104 audit(1768456145.503:770): pid=5046 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:05.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.92:22-10.0.0.1:37954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:05.525574 systemd-logind[1577]: Removed session 11. 
Jan 15 05:49:06.147319 kubelet[2767]: E0115 05:49:06.147227 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:49:06.147682 kubelet[2767]: E0115 05:49:06.147630 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:49:09.148541 kubelet[2767]: E0115 05:49:09.148458 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:49:10.516277 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:37960.service - OpenSSH per-connection server daemon (10.0.0.1:37960). Jan 15 05:49:10.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-10.0.0.1:37960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:10.519231 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:10.519302 kernel: audit: type=1130 audit(1768456150.515:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-10.0.0.1:37960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:10.611000 audit[5069]: USER_ACCT pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.613282 sshd[5069]: Accepted publickey for core from 10.0.0.1 port 37960 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:10.615720 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:10.612000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.623059 systemd-logind[1577]: New session 12 of user core. 
Jan 15 05:49:10.630892 kernel: audit: type=1101 audit(1768456150.611:773): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.630994 kernel: audit: type=1103 audit(1768456150.612:774): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.631053 kernel: audit: type=1006 audit(1768456150.613:775): pid=5069 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 15 05:49:10.613000 audit[5069]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd308e6cf0 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:10.644625 kernel: audit: type=1300 audit(1768456150.613:775): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd308e6cf0 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:10.644748 kernel: audit: type=1327 audit(1768456150.613:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:10.613000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:10.652635 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 15 05:49:10.655000 audit[5069]: USER_START pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.657000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.674798 kernel: audit: type=1105 audit(1768456150.655:776): pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.674881 kernel: audit: type=1103 audit(1768456150.657:777): pid=5073 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.744744 sshd[5073]: Connection closed by 10.0.0.1 port 37960 Jan 15 05:49:10.747318 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:10.747000 audit[5069]: USER_END pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.755761 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:37960.service: Deactivated successfully. Jan 15 05:49:10.758811 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 05:49:10.748000 audit[5069]: CRED_DISP pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.760319 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Jan 15 05:49:10.763685 systemd-logind[1577]: Removed session 12. Jan 15 05:49:10.765824 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:37968.service - OpenSSH per-connection server daemon (10.0.0.1:37968). Jan 15 05:49:10.766750 kernel: audit: type=1106 audit(1768456150.747:778): pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.766796 kernel: audit: type=1104 audit(1768456150.748:779): pid=5069 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.92:22-10.0.0.1:37960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 15 05:49:10.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-10.0.0.1:37968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:10.827000 audit[5087]: USER_ACCT pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.829746 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 37968 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:10.830000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.830000 audit[5087]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee32f5eb0 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:10.830000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:10.833488 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:10.840814 systemd-logind[1577]: New session 13 of user core. Jan 15 05:49:10.851693 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 05:49:10.853000 audit[5087]: USER_START pid=5087 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:10.856000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.002553 sshd[5091]: Connection closed by 10.0.0.1 port 37968 Jan 15 05:49:11.002734 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:11.003000 audit[5087]: USER_END pid=5087 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.003000 audit[5087]: CRED_DISP pid=5087 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.014929 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:37968.service: Deactivated successfully. Jan 15 05:49:11.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.92:22-10.0.0.1:37968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:11.019055 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 05:49:11.021166 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Jan 15 05:49:11.029827 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:37984.service - OpenSSH per-connection server daemon (10.0.0.1:37984). Jan 15 05:49:11.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-10.0.0.1:37984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:11.032500 systemd-logind[1577]: Removed session 13. Jan 15 05:49:11.135000 audit[5103]: USER_ACCT pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.136941 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 37984 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:11.136000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.136000 audit[5103]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc9f05520 a2=3 a3=0 items=0 ppid=1 pid=5103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:11.136000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:11.139526 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:11.147469 systemd-logind[1577]: New session 14 of user core. Jan 15 05:49:11.152598 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 15 05:49:11.156000 audit[5103]: USER_START pid=5103 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.159000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.258780 sshd[5107]: Connection closed by 10.0.0.1 port 37984 Jan 15 05:49:11.259795 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:11.260000 audit[5103]: USER_END pid=5103 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.260000 audit[5103]: CRED_DISP pid=5103 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:11.265671 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Jan 15 05:49:11.266672 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:37984.service: Deactivated successfully. Jan 15 05:49:11.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.92:22-10.0.0.1:37984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:11.269713 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 05:49:11.272885 systemd-logind[1577]: Removed session 14. Jan 15 05:49:12.150057 kubelet[2767]: E0115 05:49:12.149977 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:49:13.149063 kubelet[2767]: E0115 05:49:13.148920 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:49:16.281617 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). 
Jan 15 05:49:16.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:33020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:16.284071 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 15 05:49:16.284184 kernel: audit: type=1130 audit(1768456156.280:799): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:33020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:16.370000 audit[5124]: USER_ACCT pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.372052 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:16.375149 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:16.381964 systemd-logind[1577]: New session 15 of user core. Jan 15 05:49:16.371000 audit[5124]: CRED_ACQ pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.395410 kernel: audit: type=1101 audit(1768456156.370:800): pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.395485 kernel: audit: type=1103 audit(1768456156.371:801): pid=5124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.395525 kernel: audit: type=1006 audit(1768456156.371:802): pid=5124 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 15 05:49:16.371000 audit[5124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd22631e20 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:16.415333 kernel: audit: type=1300 audit(1768456156.371:802): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd22631e20 a2=3 a3=0 items=0 ppid=1 pid=5124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:16.371000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:16.420539 kernel: audit: type=1327 audit(1768456156.371:802): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:16.427699 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 15 05:49:16.430000 audit[5124]: USER_START pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.433000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.454732 kernel: audit: type=1105 audit(1768456156.430:803): pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.455261 kernel: audit: type=1103 audit(1768456156.433:804): pid=5128 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.566122 sshd[5128]: Connection closed by 10.0.0.1 port 33020 Jan 15 05:49:16.566682 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:16.567000 audit[5124]: USER_END pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.572799 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:33020.service: Deactivated successfully. Jan 15 05:49:16.576205 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 05:49:16.578095 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. Jan 15 05:49:16.580788 systemd-logind[1577]: Removed session 15. Jan 15 05:49:16.581482 kernel: audit: type=1106 audit(1768456156.567:805): pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.567000 audit[5124]: CRED_DISP pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:16.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.92:22-10.0.0.1:33020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:16.593396 kernel: audit: type=1104 audit(1768456156.567:806): pid=5124 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:17.147099 kubelet[2767]: E0115 05:49:17.146962 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:49:17.147915 kubelet[2767]: E0115 05:49:17.147874 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:49:20.148236 kubelet[2767]: E0115 05:49:20.148182 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:49:21.147657 kubelet[2767]: E0115 05:49:21.147532 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:49:21.148218 kubelet[2767]: E0115 05:49:21.148105 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:49:21.580910 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:33034.service - OpenSSH per-connection server daemon (10.0.0.1:33034). Jan 15 05:49:21.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:33034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:21.583543 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:21.583652 kernel: audit: type=1130 audit(1768456161.579:808): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:33034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:21.655000 audit[5167]: USER_ACCT pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.657709 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 33034 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:21.660605 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:21.666540 kernel: audit: type=1101 audit(1768456161.655:809): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.666625 kernel: audit: type=1103 audit(1768456161.656:810): pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.656000 audit[5167]: CRED_ACQ pid=5167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.668893 systemd-logind[1577]: New session 16 of user core. 
Jan 15 05:49:21.680743 kernel: audit: type=1006 audit(1768456161.656:811): pid=5167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 15 05:49:21.691458 kernel: audit: type=1300 audit(1768456161.656:811): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25502d30 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:21.656000 audit[5167]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25502d30 a2=3 a3=0 items=0 ppid=1 pid=5167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:21.656000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:21.696426 kernel: audit: type=1327 audit(1768456161.656:811): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:21.696796 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 05:49:21.703000 audit[5167]: USER_START pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.707000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.724131 kernel: audit: type=1105 audit(1768456161.703:812): pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.724209 kernel: audit: type=1103 audit(1768456161.707:813): pid=5171 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.872551 sshd[5171]: Connection closed by 10.0.0.1 port 33034 Jan 15 05:49:21.874298 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:21.875000 audit[5167]: USER_END pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.897593 kernel: audit: type=1106 audit(1768456161.875:814): pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.897690 kernel: audit: type=1104 audit(1768456161.875:815): 
pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.875000 audit[5167]: CRED_DISP pid=5167 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:21.895828 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Jan 15 05:49:21.896947 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:33034.service: Deactivated successfully. Jan 15 05:49:21.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.92:22-10.0.0.1:33034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:21.900269 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 05:49:21.904806 systemd-logind[1577]: Removed session 16. Jan 15 05:49:23.147320 kubelet[2767]: E0115 05:49:23.147196 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:49:25.148945 kubelet[2767]: E0115 05:49:25.148832 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:49:26.147246 kubelet[2767]: E0115 05:49:26.147153 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:49:26.894816 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:47092.service - OpenSSH per-connection server daemon (10.0.0.1:47092). Jan 15 05:49:26.900641 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:26.900899 kernel: audit: type=1130 audit(1768456166.893:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:47092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:26.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:47092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:26.983000 audit[5185]: USER_ACCT pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:26.985058 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 47092 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:26.987690 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:26.985000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:26.997084 systemd-logind[1577]: New session 17 of user core. Jan 15 05:49:27.002653 kernel: audit: type=1101 audit(1768456166.983:818): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.002737 kernel: audit: type=1103 audit(1768456166.985:819): pid=5185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.002802 kernel: audit: type=1006 audit(1768456166.985:820): pid=5185 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 15 05:49:27.007813 kernel: audit: type=1300 audit(1768456166.985:820): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a5b93a0 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:26.985000 audit[5185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6a5b93a0 a2=3 a3=0 items=0 ppid=1 pid=5185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:27.021730 kernel: audit: type=1327 audit(1768456166.985:820): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:26.985000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:27.030667 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 15 05:49:27.052071 kernel: audit: type=1105 audit(1768456167.035:821): pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.035000 audit[5185]: USER_START pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.040000 audit[5189]: CRED_ACQ pid=5189 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.062471 kernel: audit: type=1103 audit(1768456167.040:822): pid=5189 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.198290 sshd[5189]: Connection closed by 10.0.0.1 port 47092 Jan 15 05:49:27.199520 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:27.201000 audit[5185]: USER_END pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.213959 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:47092.service: Deactivated successfully. Jan 15 05:49:27.214435 kernel: audit: type=1106 audit(1768456167.201:823): pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.216324 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 05:49:27.201000 audit[5185]: CRED_DISP pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.222790 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Jan 15 05:49:27.224452 kernel: audit: type=1104 audit(1768456167.201:824): pid=5185 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:27.224837 systemd-logind[1577]: Removed session 17. Jan 15 05:49:27.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.92:22-10.0.0.1:47092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:28.146418 kubelet[2767]: E0115 05:49:28.146252 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 15 05:49:28.148043 kubelet[2767]: E0115 05:49:28.147841 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:49:32.150994 kubelet[2767]: E0115 05:49:32.150880 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:49:32.151890 kubelet[2767]: E0115 05:49:32.151562 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:49:32.228485 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:47096.service - OpenSSH per-connection server daemon (10.0.0.1:47096). Jan 15 05:49:32.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:47096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:32.230715 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:32.230776 kernel: audit: type=1130 audit(1768456172.228:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:47096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:32.321000 audit[5211]: USER_ACCT pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.323569 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 47096 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:32.326646 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:32.324000 audit[5211]: CRED_ACQ pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.338614 systemd-logind[1577]: New session 18 of user core. Jan 15 05:49:32.341854 kernel: audit: type=1101 audit(1768456172.321:827): pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.341916 kernel: audit: type=1103 audit(1768456172.324:828): pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.341966 kernel: audit: type=1006 audit(1768456172.324:829): pid=5211 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 15 05:49:32.348493 kernel: audit: type=1300 audit(1768456172.324:829): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc56640a0 a2=3 a3=0 items=0 ppid=1 pid=5211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:32.324000 audit[5211]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc56640a0 a2=3 a3=0 items=0 ppid=1 pid=5211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:32.324000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:32.357630 systemd[1]: Started session-18.scope - Session 18 of User core. 
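The recurring dns.go "Nameserver limits exceeded" warning is kubelet trimming the node's resolv.conf: it copies at most three nameservers into pod DNS config (the classic glibc resolver limit), keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and drops the rest, repeating the warning on every pod sync. A minimal sketch of the same check against a node's resolv.conf; the path and function name are illustrative:

```python
# Report which nameservers kubelet would keep and which it would drop.
MAX_NAMESERVERS = 3  # kubelet's limit, matching the glibc resolver

def split_nameservers(path: str = "/etc/resolv.conf"):
    with open(path) as f:
        servers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) >= 2]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

kept, dropped = split_nameservers()
print("applied:", " ".join(kept))
if dropped:
    print("omitted:", " ".join(dropped))
```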
Jan 15 05:49:32.361392 kernel: audit: type=1327 audit(1768456172.324:829): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:32.361000 audit[5211]: USER_START pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.365000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.383574 kernel: audit: type=1105 audit(1768456172.361:830): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.383743 kernel: audit: type=1103 audit(1768456172.365:831): pid=5215 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.521909 sshd[5215]: Connection closed by 10.0.0.1 port 47096 Jan 15 05:49:32.522603 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:32.524000 audit[5211]: USER_END pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.528227 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:47096.service: Deactivated successfully. Jan 15 05:49:32.531393 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 05:49:32.534446 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Jan 15 05:49:32.536387 systemd-logind[1577]: Removed session 18. Jan 15 05:49:32.524000 audit[5211]: CRED_DISP pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.546242 kernel: audit: type=1106 audit(1768456172.524:832): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.546598 kernel: audit: type=1104 audit(1768456172.524:833): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:32.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.92:22-10.0.0.1:47096 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 15 05:49:33.147184 kubelet[2767]: E0115 05:49:33.146978 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:49:35.147815 containerd[1600]: time="2026-01-15T05:49:35.147759109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:49:35.222451 containerd[1600]: time="2026-01-15T05:49:35.222394480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:35.223978 containerd[1600]: time="2026-01-15T05:49:35.223871157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:49:35.223978 containerd[1600]: time="2026-01-15T05:49:35.223955434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:35.224516 kubelet[2767]: E0115 05:49:35.224219 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:49:35.224516 kubelet[2767]: E0115 05:49:35.224261 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:49:35.224516 kubelet[2767]: E0115 05:49:35.224442 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-7xzb7_calico-apiserver(6b6565ef-726d-4164-a834-22cf5e5bfe9a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:35.226573 kubelet[2767]: E0115 05:49:35.226525 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:49:37.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-10.0.0.1:40550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:37.541728 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:40550.service - OpenSSH per-connection server daemon (10.0.0.1:40550). Jan 15 05:49:37.545607 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 15 05:49:37.545750 kernel: audit: type=1130 audit(1768456177.541:835): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-10.0.0.1:40550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:37.605000 audit[5232]: USER_ACCT pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.606530 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 40550 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:37.608764 sshd-session[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:37.607000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.616242 systemd-logind[1577]: New session 19 of user core. 
Jan 15 05:49:37.624917 kernel: audit: type=1101 audit(1768456177.605:836): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.624993 kernel: audit: type=1103 audit(1768456177.607:837): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.625025 kernel: audit: type=1006 audit(1768456177.607:838): pid=5232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 15 05:49:37.632263 kernel: audit: type=1300 audit(1768456177.607:838): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef28255b0 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:37.607000 audit[5232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef28255b0 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:37.607000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:37.648654 kernel: audit: type=1327 audit(1768456177.607:838): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:37.653751 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 15 05:49:37.656000 audit[5232]: USER_START pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.660000 audit[5236]: CRED_ACQ pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.682570 kernel: audit: type=1105 audit(1768456177.656:839): pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.682731 kernel: audit: type=1103 audit(1768456177.660:840): pid=5236 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.769466 sshd[5236]: Connection closed by 10.0.0.1 port 40550 Jan 15 05:49:37.770273 sshd-session[5232]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:37.776000 audit[5232]: USER_END pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.781032 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:40550.service: Deactivated successfully. Jan 15 05:49:37.783481 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 05:49:37.787478 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Jan 15 05:49:37.788454 kernel: audit: type=1106 audit(1768456177.776:841): pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.788593 kernel: audit: type=1104 audit(1768456177.776:842): pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.776000 audit[5232]: CRED_DISP pid=5232 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.789157 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:40560.service - OpenSSH per-connection server daemon (10.0.0.1:40560). Jan 15 05:49:37.790884 systemd-logind[1577]: Removed session 19. Jan 15 05:49:37.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.92:22-10.0.0.1:40550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 15 05:49:37.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-10.0.0.1:40560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:37.879000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.879751 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 40560 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:37.881000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.881000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd3be3fd0 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:37.881000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:37.883670 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:37.891883 systemd-logind[1577]: New session 20 of user core. Jan 15 05:49:37.901617 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 05:49:37.904000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:37.907000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.204775 sshd[5253]: Connection closed by 10.0.0.1 port 40560 Jan 15 05:49:38.207632 sshd-session[5249]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:38.214000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.218000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-10.0.0.1:40564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:38.225400 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:40564.service - OpenSSH per-connection server daemon (10.0.0.1:40564). Jan 15 05:49:38.229984 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:40560.service: Deactivated successfully. Jan 15 05:49:38.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.92:22-10.0.0.1:40560 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:38.233894 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 05:49:38.237552 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Jan 15 05:49:38.242426 systemd-logind[1577]: Removed session 20. Jan 15 05:49:38.299000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.300411 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 40564 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:38.301000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.301000 audit[5261]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd00144460 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:38.301000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:38.303569 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:38.310988 systemd-logind[1577]: New session 21 of user core. Jan 15 05:49:38.322711 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 15 05:49:38.326000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:38.330000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.022000 audit[5283]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:39.022000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd0c703a00 a2=0 a3=7ffd0c7039ec items=0 ppid=2920 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:39.034000 audit[5283]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:39.034000 audit[5283]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd0c703a00 a2=0 a3=0 items=0 ppid=2920 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.034000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:39.041780 sshd[5268]: Connection closed by 10.0.0.1 port 40564 Jan 15 05:49:39.043513 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:39.048000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.048000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.062022 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:40564.service: Deactivated successfully. Jan 15 05:49:39.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.92:22-10.0.0.1:40564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:39.068313 systemd[1]: session-21.scope: Deactivated successfully. 
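The SYSCALL records in this stretch read more easily once the numbers are translated: arch=c000003e selects the x86_64 syscall table, where syscall=1 is write(2) — most likely sshd-session writing the new login uid, which would explain exit=3 for the 3-byte value "500" — and syscall=46 is sendmsg(2), the netlink batch iptables-restore submits for the nft_register_rule changes above. A tiny lookup sketch, deliberately limited to the two numbers that appear here:

```python
# Map the syscall numbers from the audit SYSCALL records (x86_64 table).
X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}

for field in ("syscall=1", "syscall=46"):
    number = int(field.split("=", 1)[1])
    print(f"{field} -> {X86_64_SYSCALLS.get(number, 'unknown')}(2)")
```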
Jan 15 05:49:39.068000 audit[5286]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:39.068000 audit[5286]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcb28c86d0 a2=0 a3=7ffcb28c86bc items=0 ppid=2920 pid=5286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:39.070118 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Jan 15 05:49:39.074638 systemd-logind[1577]: Removed session 21. Jan 15 05:49:39.077031 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:40578.service - OpenSSH per-connection server daemon (10.0.0.1:40578). Jan 15 05:49:39.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-10.0.0.1:40578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:39.075000 audit[5286]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:39.075000 audit[5286]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcb28c86d0 a2=0 a3=0 items=0 ppid=2920 pid=5286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:39.149600 containerd[1600]: time="2026-01-15T05:49:39.149515243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 05:49:39.163000 audit[5290]: USER_ACCT pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.164326 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 40578 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:39.165000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.165000 audit[5290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8cd3a5d0 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:39.167286 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:39.177441 systemd-logind[1577]: New session 22 of user core. Jan 15 05:49:39.180624 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 15 05:49:39.184000 audit[5290]: USER_START pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.188000 audit[5294]: CRED_ACQ pid=5294 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.229397 containerd[1600]: time="2026-01-15T05:49:39.229288575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:39.233067 containerd[1600]: time="2026-01-15T05:49:39.232905866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 05:49:39.233442 kubelet[2767]: E0115 05:49:39.233304 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:49:39.234269 kubelet[2767]: E0115 05:49:39.234227 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 05:49:39.234607 kubelet[2767]: E0115 05:49:39.234526 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:73cf578236394d7092cc66aa5ac2392b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:39.244541 containerd[1600]: time="2026-01-15T05:49:39.232972590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:39.244541 containerd[1600]: time="2026-01-15T05:49:39.235201308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 05:49:39.310753 containerd[1600]: time="2026-01-15T05:49:39.310636362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:39.312652 containerd[1600]: time="2026-01-15T05:49:39.312579820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:39.313280 containerd[1600]: time="2026-01-15T05:49:39.313125644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 05:49:39.313735 kubelet[2767]: E0115 05:49:39.313672 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:49:39.313830 kubelet[2767]: E0115 05:49:39.313749 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 05:49:39.314705 kubelet[2767]: E0115 05:49:39.314608 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6bnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6988569c94-rjpxd_calico-apiserver(6709b21e-b302-4c1a-b8f9-4c50d880ff8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:39.316224 containerd[1600]: time="2026-01-15T05:49:39.315089150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 05:49:39.316679 kubelet[2767]: E0115 05:49:39.316627 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b" Jan 15 05:49:39.377105 containerd[1600]: time="2026-01-15T05:49:39.376960087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:39.379634 containerd[1600]: time="2026-01-15T05:49:39.379468603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 05:49:39.379634 containerd[1600]: time="2026-01-15T05:49:39.379522686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:39.379840 kubelet[2767]: E0115 05:49:39.379801 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:49:39.379896 kubelet[2767]: E0115 05:49:39.379861 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 05:49:39.380200 kubelet[2767]: E0115 05:49:39.379992 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ncpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d459cf79d-58jzf_calico-system(9a14d1ad-fdab-4109-839d-24ca471bacb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:39.381517 kubelet[2767]: E0115 05:49:39.381335 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8" Jan 15 05:49:39.475580 sshd[5294]: Connection closed by 10.0.0.1 port 40578 Jan 15 05:49:39.475541 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:39.479000 audit[5290]: USER_END pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.480000 audit[5290]: CRED_DISP pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.493445 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:40578.service: Deactivated successfully. Jan 15 05:49:39.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.92:22-10.0.0.1:40578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:39.496689 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 05:49:39.499196 systemd-logind[1577]: Session 22 logged out. Waiting for processes to exit. Jan 15 05:49:39.504628 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:40586.service - OpenSSH per-connection server daemon (10.0.0.1:40586). Jan 15 05:49:39.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-10.0.0.1:40586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:39.505603 systemd-logind[1577]: Removed session 22. Jan 15 05:49:39.585000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.586438 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 40586 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:39.587000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.587000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc06e1c8c0 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:39.587000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:39.589471 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:39.597806 systemd-logind[1577]: New session 23 of user core. Jan 15 05:49:39.612672 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 15 05:49:39.616000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.619000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.784782 sshd[5309]: Connection closed by 10.0.0.1 port 40586 Jan 15 05:49:39.787655 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:39.789000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.790000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:39.794645 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:40586.service: Deactivated successfully. Jan 15 05:49:39.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.92:22-10.0.0.1:40586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:39.798557 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 05:49:39.800277 systemd-logind[1577]: Session 23 logged out. Waiting for processes to exit. Jan 15 05:49:39.805142 systemd-logind[1577]: Removed session 23. Jan 15 05:49:44.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:34462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:44.805438 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:34462.service - OpenSSH per-connection server daemon (10.0.0.1:34462). Jan 15 05:49:44.808264 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 15 05:49:44.808567 kernel: audit: type=1130 audit(1768456184.805:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:34462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:44.884000 audit[5343]: USER_ACCT pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.885132 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 34462 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:44.887748 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:44.885000 audit[5343]: CRED_ACQ pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.896915 systemd-logind[1577]: New session 24 of user core. Jan 15 05:49:44.907394 kernel: audit: type=1101 audit(1768456184.884:885): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.907468 kernel: audit: type=1103 audit(1768456184.885:886): pid=5343 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.913184 kernel: audit: type=1006 audit(1768456184.886:887): pid=5343 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 15 05:49:44.886000 audit[5343]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3ad17870 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:44.926309 kernel: audit: type=1300 audit(1768456184.886:887): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3ad17870 a2=3 a3=0 items=0 ppid=1 pid=5343 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:44.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:44.928252 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 15 05:49:44.931627 kernel: audit: type=1327 audit(1768456184.886:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:44.935000 audit[5343]: USER_START pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.938000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.959180 kernel: audit: type=1105 audit(1768456184.935:888): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:44.959243 kernel: audit: type=1103 audit(1768456184.938:889): pid=5347 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:45.046222 sshd[5347]: Connection closed by 10.0.0.1 port 34462 Jan 15 05:49:45.046764 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:45.048000 audit[5343]: USER_END pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:45.053249 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:34462.service: Deactivated successfully. Jan 15 05:49:45.057726 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 05:49:45.061981 systemd-logind[1577]: Session 24 logged out. Waiting for processes to exit. Jan 15 05:49:45.064586 kernel: audit: type=1106 audit(1768456185.048:890): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:45.064645 systemd-logind[1577]: Removed session 24. Jan 15 05:49:45.049000 audit[5343]: CRED_DISP pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:45.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.92:22-10.0.0.1:34462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 15 05:49:45.076483 kernel: audit: type=1104 audit(1768456185.049:891): pid=5343 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:45.151337 containerd[1600]: time="2026-01-15T05:49:45.151250530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 05:49:45.220836 containerd[1600]: time="2026-01-15T05:49:45.220774363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:45.222699 containerd[1600]: time="2026-01-15T05:49:45.222457757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 05:49:45.222699 containerd[1600]: time="2026-01-15T05:49:45.222557752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:45.222997 kubelet[2767]: E0115 05:49:45.222924 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:49:45.223661 kubelet[2767]: E0115 05:49:45.223008 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 05:49:45.223661 kubelet[2767]: E0115 05:49:45.223262 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:45.224038 containerd[1600]: time="2026-01-15T05:49:45.223934001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 05:49:45.280810 containerd[1600]: time="2026-01-15T05:49:45.280710119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:45.282222 containerd[1600]: time="2026-01-15T05:49:45.282061805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 05:49:45.282585 containerd[1600]: time="2026-01-15T05:49:45.282220924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:45.283465 kubelet[2767]: E0115 05:49:45.283206 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:49:45.283465 kubelet[2767]: E0115 05:49:45.283323 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 05:49:45.283950 kubelet[2767]: E0115 05:49:45.283758 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gpmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-s4fx8_calico-system(1a790238-cc0e-45da-b99f-c6adf406e452): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:45.284764 containerd[1600]: time="2026-01-15T05:49:45.284645421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 05:49:45.285042 kubelet[2767]: E0115 05:49:45.284990 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452" Jan 15 05:49:45.356299 containerd[1600]: time="2026-01-15T05:49:45.356134006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:45.358317 containerd[1600]: time="2026-01-15T05:49:45.358199876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 05:49:45.358317 containerd[1600]: time="2026-01-15T05:49:45.358274217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:45.358800 kubelet[2767]: E0115 05:49:45.358711 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:49:45.358800 kubelet[2767]: E0115 05:49:45.358794 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 05:49:45.359056 kubelet[2767]: E0115 05:49:45.359013 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtr9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dt9mp_calico-system(cdab6cdb-eee3-4132-9980-23cedc6f5612): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:45.360766 kubelet[2767]: E0115 05:49:45.360703 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612" Jan 15 05:49:47.149822 containerd[1600]: time="2026-01-15T05:49:47.149144683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 05:49:47.215511 containerd[1600]: time="2026-01-15T05:49:47.215423720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 05:49:47.217488 containerd[1600]: time="2026-01-15T05:49:47.217336635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 05:49:47.217581 containerd[1600]: 
time="2026-01-15T05:49:47.217514584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 15 05:49:47.217810 kubelet[2767]: E0115 05:49:47.217744 2767 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:49:47.218557 kubelet[2767]: E0115 05:49:47.217830 2767 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 05:49:47.218557 kubelet[2767]: E0115 05:49:47.218133 2767 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-bf57cd658-gnjlw_calico-system(10bb0e11-4f57-4c87-8485-4dadf3148ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 05:49:47.219957 kubelet[2767]: E0115 05:49:47.219897 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0" Jan 15 05:49:48.147728 kubelet[2767]: E0115 05:49:48.147574 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a" Jan 15 05:49:48.469000 audit[5387]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:48.469000 audit[5387]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea6037750 a2=0 a3=7ffea603773c items=0 ppid=2920 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:48.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:48.480000 audit[5387]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5387 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 15 05:49:48.480000 audit[5387]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffea6037750 a2=0 a3=7ffea603773c items=0 ppid=2920 pid=5387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:48.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 15 05:49:50.068306 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:34472.service - OpenSSH per-connection server daemon (10.0.0.1:34472). Jan 15 05:49:50.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 15 05:49:50.071161 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 15 05:49:50.071236 kernel: audit: type=1130 audit(1768456190.068:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 15 05:49:50.150000 audit[5389]: USER_ACCT pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.161762 systemd-logind[1577]: New session 25 of user core. Jan 15 05:49:50.164412 kernel: audit: type=1101 audit(1768456190.150:896): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.154686 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 05:49:50.164809 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 34472 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY Jan 15 05:49:50.152000 audit[5389]: CRED_ACQ pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.180939 kernel: audit: type=1103 audit(1768456190.152:897): pid=5389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.181311 kernel: audit: type=1006 audit(1768456190.152:898): pid=5389 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 15 05:49:50.182446 kernel: audit: type=1300 audit(1768456190.152:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce1f19730 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:50.152000 audit[5389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce1f19730 a2=3 a3=0 items=0 ppid=1 pid=5389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 15 05:49:50.152000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:50.190768 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 15 05:49:50.193114 kernel: audit: type=1327 audit(1768456190.152:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 15 05:49:50.198000 audit[5389]: USER_START pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.210787 kernel: audit: type=1105 audit(1768456190.198:899): pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.200000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.220447 kernel: audit: type=1103 audit(1768456190.200:900): pid=5393 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.346213 sshd[5393]: Connection closed by 10.0.0.1 port 34472 Jan 15 05:49:50.348465 sshd-session[5389]: pam_unix(sshd:session): session closed for user core Jan 15 05:49:50.350000 audit[5389]: USER_END pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.356444 systemd-logind[1577]: Session 25 logged out. Waiting for processes to exit. Jan 15 05:49:50.356656 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:34472.service: Deactivated successfully. Jan 15 05:49:50.360847 systemd[1]: session-25.scope: Deactivated successfully. Jan 15 05:49:50.364565 systemd-logind[1577]: Removed session 25. Jan 15 05:49:50.350000 audit[5389]: CRED_DISP pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.378060 kernel: audit: type=1106 audit(1768456190.350:901): pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.378159 kernel: audit: type=1104 audit(1768456190.350:902): pid=5389 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 15 05:49:50.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.92:22-10.0.0.1:34472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success'
Jan 15 05:49:51.152129 kubelet[2767]: E0115 05:49:51.151943 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d459cf79d-58jzf" podUID="9a14d1ad-fdab-4109-839d-24ca471bacb8"
Jan 15 05:49:51.154278 kubelet[2767]: E0115 05:49:51.152809 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-rjpxd" podUID="6709b21e-b302-4c1a-b8f9-4c50d880ff8b"
Jan 15 05:49:55.362073 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:34424.service - OpenSSH per-connection server daemon (10.0.0.1:34424).
Jan 15 05:49:55.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-10.0.0.1:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:49:55.374414 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 15 05:49:55.374495 kernel: audit: type=1130 audit(1768456195.361:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-10.0.0.1:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:49:55.436000 audit[5407]: USER_ACCT pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.440275 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 05:49:55.441428 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 34424 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY
Jan 15 05:49:55.455416 kernel: audit: type=1101 audit(1768456195.436:905): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.455497 kernel: audit: type=1103 audit(1768456195.436:906): pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.436000 audit[5407]: CRED_ACQ pid=5407 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.449796 systemd-logind[1577]: New session 26 of user core.
Jan 15 05:49:55.462584 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 15 05:49:55.469033 kernel: audit: type=1006 audit(1768456195.436:907): pid=5407 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 15 05:49:55.469090 kernel: audit: type=1300 audit(1768456195.436:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c210a40 a2=3 a3=0 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 05:49:55.436000 audit[5407]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0c210a40 a2=3 a3=0 items=0 ppid=1 pid=5407 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 05:49:55.436000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 05:49:55.462000 audit[5407]: USER_START pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.503756 kernel: audit: type=1327 audit(1768456195.436:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 05:49:55.503840 kernel: audit: type=1105 audit(1768456195.462:908): pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.503861 kernel: audit: type=1103 audit(1768456195.473:909): pid=5411 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.473000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.557591 sshd[5411]: Connection closed by 10.0.0.1 port 34424
Jan 15 05:49:55.558006 sshd-session[5407]: pam_unix(sshd:session): session closed for user core
Jan 15 05:49:55.559000 audit[5407]: USER_END pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.567864 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:34424.service: Deactivated successfully.
Jan 15 05:49:55.572762 systemd[1]: session-26.scope: Deactivated successfully.
Jan 15 05:49:55.574473 kernel: audit: type=1106 audit(1768456195.559:910): pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.574552 kernel: audit: type=1104 audit(1768456195.560:911): pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.560000 audit[5407]: CRED_DISP pid=5407 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:49:55.576067 systemd-logind[1577]: Session 26 logged out. Waiting for processes to exit.
Jan 15 05:49:55.578330 systemd-logind[1577]: Removed session 26.
Jan 15 05:49:55.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.92:22-10.0.0.1:34424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:49:56.147523 kubelet[2767]: E0115 05:49:56.147164 2767 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 15 05:50:00.148090 kubelet[2767]: E0115 05:50:00.147939 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-s4fx8" podUID="1a790238-cc0e-45da-b99f-c6adf406e452"
Jan 15 05:50:00.148925 kubelet[2767]: E0115 05:50:00.148756 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bf57cd658-gnjlw" podUID="10bb0e11-4f57-4c87-8485-4dadf3148ce0"
Jan 15 05:50:00.149786 kubelet[2767]: E0115 05:50:00.149714 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dt9mp" podUID="cdab6cdb-eee3-4132-9980-23cedc6f5612"
Jan 15 05:50:00.574762 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440).
Jan 15 05:50:00.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.92:22-10.0.0.1:34440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:50:00.578086 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 15 05:50:00.578242 kernel: audit: type=1130 audit(1768456200.574:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.92:22-10.0.0.1:34440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:50:00.649000 audit[5425]: USER_ACCT pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.653426 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 15 05:50:00.660823 kernel: audit: type=1101 audit(1768456200.649:914): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.660880 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:/Rgvn6r3r03cZbJrf1jRvFb5295y/jFmBYqShYhusYY
Jan 15 05:50:00.651000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.673112 systemd-logind[1577]: New session 27 of user core.
Jan 15 05:50:00.677130 kernel: audit: type=1103 audit(1768456200.651:915): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.677276 kernel: audit: type=1006 audit(1768456200.651:916): pid=5425 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 15 05:50:00.677324 kernel: audit: type=1300 audit(1768456200.651:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0f0361b0 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 05:50:00.651000 audit[5425]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0f0361b0 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 15 05:50:00.651000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 05:50:00.690685 kernel: audit: type=1327 audit(1768456200.651:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 15 05:50:00.696724 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 15 05:50:00.702000 audit[5425]: USER_START pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.705000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.723772 kernel: audit: type=1105 audit(1768456200.702:917): pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.725006 kernel: audit: type=1103 audit(1768456200.705:918): pid=5429 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.869681 sshd[5429]: Connection closed by 10.0.0.1 port 34440
Jan 15 05:50:00.872678 sshd-session[5425]: pam_unix(sshd:session): session closed for user core
Jan 15 05:50:00.874000 audit[5425]: USER_END pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.881106 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:34440.service: Deactivated successfully.
Jan 15 05:50:00.885629 systemd[1]: session-27.scope: Deactivated successfully.
Jan 15 05:50:00.888281 systemd-logind[1577]: Session 27 logged out. Waiting for processes to exit.
Jan 15 05:50:00.874000 audit[5425]: CRED_DISP pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.890247 systemd-logind[1577]: Removed session 27.
Jan 15 05:50:00.898251 kernel: audit: type=1106 audit(1768456200.874:919): pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.898450 kernel: audit: type=1104 audit(1768456200.874:920): pid=5425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 15 05:50:00.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.92:22-10.0.0.1:34440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 15 05:50:01.148507 kubelet[2767]: E0115 05:50:01.148214 2767 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6988569c94-7xzb7" podUID="6b6565ef-726d-4164-a834-22cf5e5bfe9a"