Jan 14 01:29:26.489634 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026
Jan 14 01:29:26.489685 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:29:26.489695 kernel: BIOS-provided physical RAM map:
Jan 14 01:29:26.489704 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 01:29:26.489710 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 01:29:26.489716 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 01:29:26.489723 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 01:29:26.489729 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 01:29:26.489735 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 01:29:26.489741 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 01:29:26.489748 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 14 01:29:26.489756 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 01:29:26.489762 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 01:29:26.489768 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 01:29:26.489776 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 01:29:26.489782 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 01:29:26.489791 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 01:29:26.489797 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 01:29:26.489804 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 01:29:26.489810 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 01:29:26.489817 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 01:29:26.489824 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 01:29:26.489830 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 01:29:26.489887 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:29:26.489895 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 01:29:26.489901 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:29:26.489911 kernel: NX (Execute Disable) protection: active
Jan 14 01:29:26.489918 kernel: APIC: Static calls initialized
Jan 14 01:29:26.489924 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 14 01:29:26.489931 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 14 01:29:26.489938 kernel: extended physical RAM map:
Jan 14 01:29:26.489944 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 01:29:26.489951 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 01:29:26.489958 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 01:29:26.489964 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 01:29:26.489971 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 01:29:26.489978 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 01:29:26.489986 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 01:29:26.489993 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 14 01:29:26.489999 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 14 01:29:26.490009 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 14 01:29:26.490018 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 14 01:29:26.490025 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 14 01:29:26.490032 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 01:29:26.490039 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 01:29:26.490046 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 01:29:26.490054 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 01:29:26.490060 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 01:29:26.490067 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 01:29:26.490075 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 01:29:26.490084 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 01:29:26.490091 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 01:29:26.490098 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 01:29:26.490105 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 01:29:26.490112 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 01:29:26.490119 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:29:26.490126 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 01:29:26.490133 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:29:26.490140 kernel: efi: EFI v2.7 by EDK II
Jan 14 01:29:26.490147 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 14 01:29:26.490154 kernel: random: crng init done
Jan 14 01:29:26.490163 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 14 01:29:26.490170 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 14 01:29:26.490177 kernel: secureboot: Secure boot disabled
Jan 14 01:29:26.490184 kernel: SMBIOS 2.8 present.
Jan 14 01:29:26.490191 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 14 01:29:26.490197 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:29:26.490204 kernel: Hypervisor detected: KVM
Jan 14 01:29:26.490211 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 01:29:26.490218 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:29:26.490225 kernel: kvm-clock: using sched offset of 18262389347 cycles
Jan 14 01:29:26.490233 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:29:26.490242 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 01:29:26.490251 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:29:26.490258 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:29:26.490265 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 01:29:26.490272 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 01:29:26.490279 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:29:26.490287 kernel: Using GB pages for direct mapping
Jan 14 01:29:26.490296 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:29:26.490303 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 14 01:29:26.490311 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 14 01:29:26.490318 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490325 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490332 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 14 01:29:26.490340 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490349 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490402 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490410 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:29:26.490417 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 14 01:29:26.490424 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 14 01:29:26.490432 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 14 01:29:26.490439 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 14 01:29:26.490450 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 14 01:29:26.490457 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 14 01:29:26.490464 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 14 01:29:26.490472 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 14 01:29:26.490479 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 14 01:29:26.490486 kernel: No NUMA configuration found
Jan 14 01:29:26.490493 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 14 01:29:26.490501 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 14 01:29:26.490541 kernel: Zone ranges:
Jan 14 01:29:26.490549 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:29:26.490556 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 14 01:29:26.490563 kernel: Normal empty
Jan 14 01:29:26.490570 kernel: Device empty
Jan 14 01:29:26.490577 kernel: Movable zone start for each node
Jan 14 01:29:26.490584 kernel: Early memory node ranges
Jan 14 01:29:26.490616 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 01:29:26.490624 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 14 01:29:26.490631 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 14 01:29:26.490638 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 14 01:29:26.490645 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 14 01:29:26.490652 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 14 01:29:26.490659 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 14 01:29:26.490666 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 14 01:29:26.490698 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 14 01:29:26.490706 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:29:26.490780 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 01:29:26.490811 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 14 01:29:26.490818 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:29:26.490825 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 14 01:29:26.490833 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 14 01:29:26.490886 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 14 01:29:26.490893 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 14 01:29:26.490901 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 14 01:29:26.490937 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:29:26.490944 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:29:26.490952 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:29:26.490959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:29:26.490990 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:29:26.490998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:29:26.491005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:29:26.491013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:29:26.491021 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:29:26.491028 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 01:29:26.491036 kernel: TSC deadline timer available
Jan 14 01:29:26.491067 kernel: CPU topo: Max. logical packages: 1
Jan 14 01:29:26.491074 kernel: CPU topo: Max. logical dies: 1
Jan 14 01:29:26.491081 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:29:26.491089 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:29:26.491097 kernel: CPU topo: Num. cores per package: 4
Jan 14 01:29:26.491104 kernel: CPU topo: Num. threads per package: 4
Jan 14 01:29:26.491111 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 01:29:26.491118 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:29:26.491149 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 01:29:26.491157 kernel: kvm-guest: setup PV sched yield
Jan 14 01:29:26.491164 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 14 01:29:26.491172 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:29:26.491180 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:29:26.491187 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 01:29:26.491195 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 01:29:26.491226 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 01:29:26.491233 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 01:29:26.491241 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:29:26.491248 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:29:26.491257 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:29:26.491265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 01:29:26.491295 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:29:26.491303 kernel: Fallback order for Node 0: 0
Jan 14 01:29:26.491311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 14 01:29:26.491319 kernel: Policy zone: DMA32
Jan 14 01:29:26.491326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:29:26.491334 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 01:29:26.491341 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 01:29:26.491349 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:29:26.491457 kernel: Dynamic Preempt: voluntary
Jan 14 01:29:26.491466 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:29:26.491474 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:29:26.491483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 01:29:26.491490 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:29:26.491498 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:29:26.491506 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:29:26.491541 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:29:26.491549 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 01:29:26.491557 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:29:26.491565 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:29:26.491573 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:29:26.491580 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 01:29:26.491588 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:29:26.491680 kernel: Console: colour dummy device 80x25
Jan 14 01:29:26.491689 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:29:26.495614 kernel: ACPI: Core revision 20240827
Jan 14 01:29:26.495646 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 01:29:26.495661 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:29:26.495673 kernel: x2apic enabled
Jan 14 01:29:26.495684 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:29:26.495766 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 01:29:26.495778 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 01:29:26.495790 kernel: kvm-guest: setup PV IPIs
Jan 14 01:29:26.495801 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 01:29:26.495813 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:29:26.495824 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 01:29:26.495906 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:29:26.495966 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 01:29:26.495978 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 01:29:26.495989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:29:26.496000 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:29:26.496014 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:29:26.496027 kernel: Speculative Store Bypass: Vulnerable
Jan 14 01:29:26.496039 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 01:29:26.496099 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 01:29:26.496115 kernel: active return thunk: srso_alias_return_thunk
Jan 14 01:29:26.496126 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 01:29:26.496137 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 01:29:26.496148 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 01:29:26.496159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:29:26.496170 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:29:26.496228 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:29:26.496241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:29:26.496253 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 01:29:26.496265 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:29:26.496277 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:29:26.496289 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:29:26.496301 kernel: landlock: Up and running.
Jan 14 01:29:26.496344 kernel: SELinux: Initializing.
Jan 14 01:29:26.496401 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:29:26.496417 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:29:26.496431 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 01:29:26.496443 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 01:29:26.496454 kernel: signal: max sigframe size: 1776
Jan 14 01:29:26.496464 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:29:26.496526 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:29:26.496538 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 01:29:26.496549 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 01:29:26.496560 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:29:26.496570 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:29:26.496581 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 01:29:26.496596 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 01:29:26.496653 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 01:29:26.496666 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 120812K reserved, 0K cma-reserved)
Jan 14 01:29:26.496681 kernel: devtmpfs: initialized
Jan 14 01:29:26.496693 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:29:26.496704 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 14 01:29:26.496715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 14 01:29:26.496726 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 14 01:29:26.496786 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 14 01:29:26.496798 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 14 01:29:26.496809 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 14 01:29:26.496820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:29:26.496830 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 01:29:26.496920 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:29:26.496935 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:29:26.496993 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:29:26.497004 kernel: audit: type=2000 audit(1768354155.718:1): state=initialized audit_enabled=0 res=1
Jan 14 01:29:26.497017 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:29:26.497032 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:29:26.497043 kernel: cpuidle: using governor menu
Jan 14 01:29:26.497054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:29:26.497065 kernel: dca service started, version 1.12.1
Jan 14 01:29:26.497134 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 14 01:29:26.497145 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:29:26.497156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:29:26.497233 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:29:26.497248 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:29:26.497260 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:29:26.497273 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:29:26.497319 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:29:26.497333 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:29:26.497344 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:29:26.497404 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:29:26.497417 kernel: ACPI: Interpreter enabled
Jan 14 01:29:26.497432 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 01:29:26.497445 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:29:26.497456 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:29:26.497517 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:29:26.497531 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:29:26.497542 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:29:26.498006 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:29:26.498296 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 01:29:26.498759 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 01:29:26.498777 kernel: PCI host bridge to bus 0000:00
Jan 14 01:29:26.499181 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:29:26.499453 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:29:26.499669 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:29:26.499962 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 14 01:29:26.500231 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 14 01:29:26.500507 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 14 01:29:26.500724 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:29:26.501059 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:29:26.501307 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:29:26.502636 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 14 01:29:26.504523 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 14 01:29:26.506987 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 14 01:29:26.507617 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:29:26.507950 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 11718 usecs
Jan 14 01:29:26.508205 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 01:29:26.508553 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 14 01:29:26.508789 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 14 01:29:26.509100 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 14 01:29:26.509345 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:29:26.509637 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 14 01:29:26.509955 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 14 01:29:26.510249 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 14 01:29:26.510547 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:29:26.510784 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 14 01:29:26.511102 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 14 01:29:26.511338 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 14 01:29:26.511617 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 14 01:29:26.512008 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:29:26.512241 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:29:26.512516 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 14 01:29:26.512806 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:29:26.513139 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 14 01:29:26.513483 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 14 01:29:26.513728 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:29:26.514041 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 14 01:29:26.514058 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:29:26.514069 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:29:26.514085 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:29:26.514149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:29:26.514160 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:29:26.514174 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:29:26.514187 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:29:26.514198 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:29:26.514209 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:29:26.514219 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:29:26.514280 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:29:26.514293 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:29:26.514305 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:29:26.514318 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:29:26.514330 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:29:26.514343 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:29:26.514389 kernel: iommu: Default domain type: Translated
Jan 14 01:29:26.514434 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:29:26.514449 kernel: efivars: Registered efivars operations
Jan 14 01:29:26.514461 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:29:26.514472 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:29:26.514483 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 14 01:29:26.514493 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 14 01:29:26.514504 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 14 01:29:26.514514 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 14 01:29:26.514575 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 14 01:29:26.514586 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 14 01:29:26.514597 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 14 01:29:26.514608 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 14 01:29:26.514924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:29:26.515228 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:29:26.515577 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:29:26.515596 kernel: vgaarb: loaded
Jan 14 01:29:26.515611 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 01:29:26.515622 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 01:29:26.515633 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:29:26.515644 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:29:26.515655 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:29:26.515721 kernel: pnp: PnP ACPI init
Jan 14 01:29:26.516051 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 14 01:29:26.516073 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 01:29:26.516084 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:29:26.516095 kernel: NET: Registered PF_INET protocol family
Jan 14 01:29:26.516106 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 01:29:26.516118 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 01:29:26.516401 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:29:26.516461 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:29:26.516473 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 01:29:26.516483 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 01:29:26.516495 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:29:26.516506 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:29:26.516517 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:29:26.516570 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:29:26.516808 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 14 01:29:26.517196 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 14 01:29:26.517495 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:29:26.517771 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:29:26.518067 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:29:26.518342 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 14 01:29:26.518636 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 14 01:29:26.518969 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 14 01:29:26.518990 kernel: PCI: CLS 0 bytes, default 64
Jan 14 01:29:26.519001 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:29:26.519012 kernel: Initialise system trusted keyrings
Jan 14 01:29:26.519023 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 01:29:26.519092 kernel: Key type asymmetric registered
Jan 14 01:29:26.519104 kernel: Asymmetric key parser 'x509' registered
Jan 14 01:29:26.519115 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 01:29:26.519125 kernel: io scheduler mq-deadline registered
Jan 14 01:29:26.519136 kernel: io scheduler kyber registered
Jan 14 01:29:26.519149 kernel: io scheduler bfq registered
Jan 14 01:29:26.519162 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 01:29:26.519260 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 01:29:26.519305 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 01:29:26.519318 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 01:29:26.519331 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 01:29:26.519344 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 01:29:26.519411 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 01:29:26.519419 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 01:29:26.519427 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 01:29:26.519670 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 01:29:26.519689 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 01:29:26.519991 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 01:29:26.520312 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T01:29:23 UTC (1768354163)
Jan 14 01:29:26.520630 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 14 01:29:26.520653 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 01:29:26.520665 kernel: efifb: probing for efifb
Jan 14 01:29:26.520749 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 14 01:29:26.520763 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 14 01:29:26.520774 kernel: efifb: scrolling: redraw
Jan 14 01:29:26.520910 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 01:29:26.520925 kernel: Console: switching to colour frame buffer device 160x50
Jan 14 01:29:26.520936 kernel: fb0: EFI VGA frame buffer device
Jan 14 01:29:26.520947 kernel: pstore: Using crash dump compression: deflate
Jan 14 01:29:26.520958 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 01:29:26.520969 kernel: NET: Registered PF_INET6 protocol family
Jan 14 01:29:26.520980 kernel: Segment Routing with IPv6
Jan 14 01:29:26.521038 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 01:29:26.521051 kernel: NET: Registered PF_PACKET protocol family
Jan 14 01:29:26.521062 kernel: Key type dns_resolver
registered Jan 14 01:29:26.521073 kernel: IPI shorthand broadcast: enabled Jan 14 01:29:26.521087 kernel: sched_clock: Marking stable (5632033279, 3269159725)->(10186332012, -1285139008) Jan 14 01:29:26.521100 kernel: registered taskstats version 1 Jan 14 01:29:26.521112 kernel: Loading compiled-in X.509 certificates Jan 14 01:29:26.521172 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:29:26.521186 kernel: Demotion targets for Node 0: null Jan 14 01:29:26.521198 kernel: Key type .fscrypt registered Jan 14 01:29:26.521209 kernel: Key type fscrypt-provisioning registered Jan 14 01:29:26.521220 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:29:26.521231 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:29:26.521242 kernel: ima: No architecture policies found Jan 14 01:29:26.521297 kernel: clk: Disabling unused clocks Jan 14 01:29:26.521310 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:29:26.521323 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:29:26.521336 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:29:26.521348 kernel: Run /init as init process Jan 14 01:29:26.521394 kernel: with arguments: Jan 14 01:29:26.521403 kernel: /init Jan 14 01:29:26.521442 kernel: with environment: Jan 14 01:29:26.521452 kernel: HOME=/ Jan 14 01:29:26.521466 kernel: TERM=linux Jan 14 01:29:26.521479 kernel: SCSI subsystem initialized Jan 14 01:29:26.521490 kernel: libata version 3.00 loaded. 
Jan 14 01:29:26.521779 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 01:29:26.521798 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 01:29:26.522133 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 01:29:26.522483 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 01:29:26.522717 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 01:29:26.524243 kernel: scsi host0: ahci Jan 14 01:29:26.524547 kernel: scsi host1: ahci Jan 14 01:29:26.524799 kernel: scsi host2: ahci Jan 14 01:29:26.525168 kernel: scsi host3: ahci Jan 14 01:29:26.525492 kernel: scsi host4: ahci Jan 14 01:29:26.525940 kernel: scsi host5: ahci Jan 14 01:29:26.525960 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 14 01:29:26.525972 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 14 01:29:26.525984 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 14 01:29:26.526046 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 14 01:29:26.526058 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 14 01:29:26.526069 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 14 01:29:26.526080 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 01:29:26.526096 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 01:29:26.526108 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 01:29:26.526119 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 14 01:29:26.526183 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 01:29:26.526197 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 01:29:26.526208 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 14 01:29:26.526219 kernel: ata3.00: applying bridge limits Jan 14 01:29:26.526230 kernel: ata2: 
SATA link down (SStatus 0 SControl 300) Jan 14 01:29:26.526241 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 01:29:26.526251 kernel: ata3.00: configured for UDMA/100 Jan 14 01:29:26.526606 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 01:29:26.526940 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 14 01:29:26.527175 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 14 01:29:26.527194 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:29:26.527207 kernel: GPT:16515071 != 27000831 Jan 14 01:29:26.527509 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 14 01:29:26.527578 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 01:29:26.527590 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 01:29:26.527602 kernel: GPT:16515071 != 27000831 Jan 14 01:29:26.527614 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:29:26.527629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 01:29:26.527967 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 14 01:29:26.527989 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:29:26.528052 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:29:26.528064 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:29:26.528076 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:29:26.528090 kernel: raid6: avx2x4 gen() 33584 MB/s Jan 14 01:29:26.528103 kernel: raid6: avx2x2 gen() 33386 MB/s Jan 14 01:29:26.528114 kernel: raid6: avx2x1 gen() 21703 MB/s Jan 14 01:29:26.528125 kernel: raid6: using algorithm avx2x4 gen() 33584 MB/s Jan 14 01:29:26.528192 kernel: raid6: .... 
xor() 4653 MB/s, rmw enabled Jan 14 01:29:26.528206 kernel: raid6: using avx2x2 recovery algorithm Jan 14 01:29:26.528217 kernel: xor: automatically using best checksumming function avx Jan 14 01:29:26.528229 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:29:26.528240 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (182) Jan 14 01:29:26.528252 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:29:26.528263 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:29:26.528319 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:29:26.528332 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:29:26.528344 kernel: loop: module loaded Jan 14 01:29:26.528405 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:29:26.528464 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:29:26.528478 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:29:26.528493 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:29:26.528554 systemd[1]: Detected virtualization kvm. Jan 14 01:29:26.528566 systemd[1]: Detected architecture x86-64. Jan 14 01:29:26.528578 systemd[1]: Running in initrd. Jan 14 01:29:26.528589 systemd[1]: No hostname configured, using default hostname. Jan 14 01:29:26.528604 systemd[1]: Hostname set to . Jan 14 01:29:26.528660 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:29:26.528673 systemd[1]: Queued start job for default target initrd.target. 
Jan 14 01:29:26.528685 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:29:26.528700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:29:26.528713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:29:26.528727 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:29:26.528739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:29:26.528801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:29:26.528815 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:29:26.528827 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:29:26.528913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:29:26.528927 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:29:26.528939 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:29:26.529081 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:29:26.529097 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:29:26.529111 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:29:26.529123 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:29:26.529135 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:29:26.529147 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:29:26.529159 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:29:26.529225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 14 01:29:26.529238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:29:26.529250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:29:26.529262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:29:26.529278 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:29:26.529291 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:29:26.529348 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:29:26.529406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:29:26.529419 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:29:26.529437 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:29:26.529451 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:29:26.529462 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:29:26.529474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:29:26.529538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:29:26.529551 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:29:26.529563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:29:26.529575 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:29:26.529670 systemd-journald[319]: Collecting audit messages is enabled. Jan 14 01:29:26.529704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 14 01:29:26.529719 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:29:26.529776 kernel: Bridge firewalling registered Jan 14 01:29:26.529790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:29:26.529804 kernel: audit: type=1130 audit(1768354166.528:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.529819 systemd-journald[319]: Journal started Jan 14 01:29:26.529888 systemd-journald[319]: Runtime Journal (/run/log/journal/0cb2d66674e644bc8c0caa4f152a1efc) is 6M, max 48M, 42M free. Jan 14 01:29:26.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.524026 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 14 01:29:26.684961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:29:26.685105 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:29:26.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.693094 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:29:26.698922 kernel: audit: type=1130 audit(1768354166.684:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.717160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 01:29:26.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.734899 kernel: audit: type=1130 audit(1768354166.721:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.736218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:29:26.759340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:29:26.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.768915 kernel: audit: type=1130 audit(1768354166.758:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.769070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:29:26.796817 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:29:26.797601 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:29:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.824051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 01:29:26.840604 kernel: audit: type=1130 audit(1768354166.816:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.840630 kernel: audit: type=1130 audit(1768354166.823:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.840794 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:29:26.859168 kernel: audit: type=1130 audit(1768354166.847:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.859339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:29:26.877690 kernel: audit: type=1130 audit(1768354166.859:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:26.880819 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 14 01:29:26.892354 kernel: audit: type=1334 audit(1768354166.881:10): prog-id=6 op=LOAD Jan 14 01:29:26.881000 audit: BPF prog-id=6 op=LOAD Jan 14 01:29:26.882520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:29:26.935406 dracut-cmdline[356]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:29:27.470167 systemd-resolved[357]: Positive Trust Anchors: Jan 14 01:29:27.470212 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:29:27.470220 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:29:27.470305 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:29:27.551526 systemd-resolved[357]: Defaulting to hostname 'linux'. Jan 14 01:29:27.553823 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:29:27.574739 kernel: audit: type=1130 audit(1768354167.554:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:29:27.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:27.555594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:29:27.637044 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:29:27.659991 kernel: iscsi: registered transport (tcp) Jan 14 01:29:27.697305 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:29:27.697467 kernel: QLogic iSCSI HBA Driver Jan 14 01:29:27.744369 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:29:27.801426 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:29:27.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:27.815528 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:29:28.407480 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:29:28.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:28.410132 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:29:28.422700 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:29:28.493006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:29:28.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:29:28.494000 audit: BPF prog-id=7 op=LOAD Jan 14 01:29:28.494000 audit: BPF prog-id=8 op=LOAD Jan 14 01:29:28.495999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:29:28.550650 systemd-udevd[591]: Using default interface naming scheme 'v257'. Jan 14 01:29:28.576815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:29:28.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:28.589133 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:29:28.645825 dracut-pre-trigger[646]: rd.md=0: removing MD RAID activation Jan 14 01:29:28.689543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:29:28.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:28.707000 audit: BPF prog-id=9 op=LOAD Jan 14 01:29:28.712066 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:29:28.740987 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:29:28.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:28.765664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 14 01:29:30.921327 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1711881837 wd_nsec: 1711881048 Jan 14 01:29:30.968293 systemd-networkd[725]: lo: Link UP Jan 14 01:29:30.968321 systemd-networkd[725]: lo: Gained carrier Jan 14 01:29:30.971830 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:29:30.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:30.978919 systemd[1]: Reached target network.target - Network. Jan 14 01:29:31.015595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:29:31.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:31.026659 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:29:31.169923 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 01:29:32.134946 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:29:32.157921 kernel: AES CTR mode by8 optimization enabled Jan 14 01:29:32.179364 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 01:29:32.197300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 01:29:32.220652 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 01:29:32.236146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 14 01:29:32.240393 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:29:32.240401 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:29:32.243757 systemd-networkd[725]: eth0: Link UP Jan 14 01:29:32.244327 systemd-networkd[725]: eth0: Gained carrier Jan 14 01:29:32.244343 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:29:32.266209 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:29:32.310500 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 01:29:32.310542 kernel: audit: type=1131 audit(1768354172.292:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:32.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:32.288208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:29:32.288484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:29:32.295014 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:29:32.319494 systemd-networkd[725]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:29:32.322105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:29:32.349199 disk-uuid[837]: Primary Header is updated. Jan 14 01:29:32.349199 disk-uuid[837]: Secondary Entries is updated. Jan 14 01:29:32.349199 disk-uuid[837]: Secondary Header is updated. 
Jan 14 01:29:32.409617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:29:32.431530 kernel: audit: type=1130 audit(1768354172.413:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:32.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:32.436296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:29:32.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:32.457924 kernel: audit: type=1130 audit(1768354172.439:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:32.460372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:29:32.476217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:29:32.486282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:29:32.495516 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 01:29:32.555240 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:29:32.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:32.582356 kernel: audit: type=1130 audit(1768354172.554:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.428620 systemd-networkd[725]: eth0: Gained IPv6LL
Jan 14 01:29:33.445391 disk-uuid[839]: Warning: The kernel is still using the old partition table.
Jan 14 01:29:33.445391 disk-uuid[839]: The new table will be used at the next reboot or after you
Jan 14 01:29:33.445391 disk-uuid[839]: run partprobe(8) or kpartx(8)
Jan 14 01:29:33.445391 disk-uuid[839]: The operation has completed successfully.
Jan 14 01:29:33.490278 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:29:33.490549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:29:33.527944 kernel: audit: type=1130 audit(1768354173.502:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.527997 kernel: audit: type=1131 audit(1768354173.502:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.505732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:29:33.571107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (863)
Jan 14 01:29:33.582037 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:29:33.582071 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:29:33.591310 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:29:33.591339 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:29:33.605982 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:29:33.608690 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:29:33.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:33.617662 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:29:33.635294 kernel: audit: type=1130 audit(1768354173.615:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:35.403961 ignition[882]: Ignition 2.24.0
Jan 14 01:29:35.404012 ignition[882]: Stage: fetch-offline
Jan 14 01:29:35.404267 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:29:35.404282 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:29:35.432261 ignition[882]: parsed url from cmdline: ""
Jan 14 01:29:35.432301 ignition[882]: no config URL provided
Jan 14 01:29:35.450157 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:29:35.461686 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:29:35.487311 ignition[882]: op(1): [started] loading QEMU firmware config module
Jan 14 01:29:35.501372 ignition[882]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 01:29:35.633059 ignition[882]: op(1): [finished] loading QEMU firmware config module
Jan 14 01:29:35.907562 ignition[882]: parsing config with SHA512: 30effd767aa16e7822d34dbcf609402694e30f1324ede204f00c1fa7b20c1d57f79c50cdf80c8afcfcc82ac1a63978754f5588303a1026c10a9864217a42d314
Jan 14 01:29:35.979678 unknown[882]: fetched base config from "system"
Jan 14 01:29:35.979716 unknown[882]: fetched user config from "qemu"
Jan 14 01:29:35.980609 ignition[882]: fetch-offline: fetch-offline passed
Jan 14 01:29:35.980773 ignition[882]: Ignition finished successfully
Jan 14 01:29:36.014994 kernel: audit: type=1130 audit(1768354175.998:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:35.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:35.991668 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:29:36.013936 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 01:29:36.018682 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:29:36.195077 ignition[892]: Ignition 2.24.0
Jan 14 01:29:36.195112 ignition[892]: Stage: kargs
Jan 14 01:29:36.195576 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:29:36.195593 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:29:36.213436 ignition[892]: kargs: kargs passed
Jan 14 01:29:36.216272 ignition[892]: Ignition finished successfully
Jan 14 01:29:36.224073 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:29:36.246712 kernel: audit: type=1130 audit(1768354176.228:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:36.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:36.233967 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:29:36.405989 ignition[900]: Ignition 2.24.0
Jan 14 01:29:36.406027 ignition[900]: Stage: disks
Jan 14 01:29:36.406211 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:29:36.406221 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:29:36.423378 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:29:36.448626 kernel: audit: type=1130 audit(1768354176.430:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:36.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:36.409341 ignition[900]: disks: disks passed
Jan 14 01:29:36.449282 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:29:36.409533 ignition[900]: Ignition finished successfully
Jan 14 01:29:36.457158 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:29:36.475045 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:29:36.490798 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:29:36.499627 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:29:36.508905 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:29:36.636008 systemd-fsck[910]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 01:29:36.644112 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:29:36.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:36.660151 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:29:37.157002 kernel: EXT4-fs (vda9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none.
Jan 14 01:29:37.157753 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:29:37.161925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:29:37.187805 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:29:37.198237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:29:37.203523 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 01:29:37.203574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:29:37.203606 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:29:37.257977 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:29:37.271679 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:29:37.292951 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (919)
Jan 14 01:29:37.304268 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:29:37.304527 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:29:37.315176 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:29:37.315345 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:29:37.317691 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:29:37.611812 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:29:37.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.621760 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:29:37.639300 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:29:37.639323 kernel: audit: type=1130 audit(1768354177.619:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.639941 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:29:37.677215 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:29:37.685510 kernel: BTRFS info (device vda6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:29:37.723333 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:29:37.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.742297 kernel: audit: type=1130 audit(1768354177.732:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.807642 ignition[1016]: INFO : Ignition 2.24.0
Jan 14 01:29:37.807642 ignition[1016]: INFO : Stage: mount
Jan 14 01:29:37.815146 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:29:37.815146 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:29:37.815146 ignition[1016]: INFO : mount: mount passed
Jan 14 01:29:37.815146 ignition[1016]: INFO : Ignition finished successfully
Jan 14 01:29:37.848322 kernel: audit: type=1130 audit(1768354177.818:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:37.812128 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:29:37.822162 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:29:38.163211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:29:38.268249 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030)
Jan 14 01:29:38.291047 kernel: BTRFS info (device vda6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:29:38.291138 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:29:38.310590 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:29:38.310785 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:29:38.314286 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:29:38.448832 ignition[1047]: INFO : Ignition 2.24.0
Jan 14 01:29:38.453114 ignition[1047]: INFO : Stage: files
Jan 14 01:29:38.456721 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:29:38.456721 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:29:38.471823 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 01:29:38.493588 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 01:29:38.493588 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 01:29:38.508783 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 01:29:38.508783 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 01:29:38.508783 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 01:29:38.508783 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:29:38.508783 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 14 01:29:38.503270 unknown[1047]: wrote ssh authorized keys file for user: core
Jan 14 01:29:38.618406 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 01:29:39.410431 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 14 01:29:39.423924 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 01:29:39.431809 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 01:29:39.431809 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:29:39.449766 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 14 01:29:39.869777 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 01:29:40.293233 kernel: hrtimer: interrupt took 8267552 ns
Jan 14 01:29:42.682495 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 14 01:29:42.682495 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 01:29:42.695184 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:29:42.731745 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:29:42.731745 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 01:29:42.731745 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 01:29:42.731745 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:29:42.754566 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:29:42.754566 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 01:29:42.754566 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:29:42.809381 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:29:42.823286 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:29:42.828936 ignition[1047]: INFO : files: files passed
Jan 14 01:29:42.828936 ignition[1047]: INFO : Ignition finished successfully
Jan 14 01:29:42.866431 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 01:29:42.886918 kernel: audit: type=1130 audit(1768354182.873:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.875576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 01:29:42.916348 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 01:29:42.934770 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 01:29:42.952685 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 01:29:42.979957 kernel: audit: type=1130 audit(1768354182.952:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.980085 kernel: audit: type=1131 audit(1768354182.952:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:42.986376 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 01:29:43.004211 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:29:43.004211 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:29:43.014654 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:29:43.028400 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:29:43.047808 kernel: audit: type=1130 audit(1768354183.034:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.048326 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 01:29:43.059146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 01:29:43.174330 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 01:29:43.174548 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 01:29:43.201110 kernel: audit: type=1130 audit(1768354183.177:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.201140 kernel: audit: type=1131 audit(1768354183.177:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.178235 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 01:29:43.204411 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 01:29:43.214770 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 01:29:43.216235 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 01:29:43.270216 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:29:43.290141 kernel: audit: type=1130 audit(1768354183.269:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.274212 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 01:29:43.321296 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:29:43.321631 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:29:43.329600 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:29:43.333251 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 01:29:43.346078 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 01:29:43.346271 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:29:43.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.373118 kernel: audit: type=1131 audit(1768354183.352:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.377014 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 01:29:43.381973 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 01:29:43.392045 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 01:29:43.406244 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:29:43.417581 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 01:29:43.431803 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:29:43.443195 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 01:29:43.457580 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:29:43.472291 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 01:29:43.485982 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 01:29:43.491759 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 01:29:43.505389 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 01:29:43.506000 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:29:43.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.525880 kernel: audit: type=1131 audit(1768354183.512:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.525767 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:29:43.529662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:29:43.533193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 01:29:43.533409 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:29:43.562292 kernel: audit: type=1131 audit(1768354183.546:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.540631 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 01:29:43.540787 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:29:43.562622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 01:29:43.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.563227 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:29:43.576461 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 01:29:43.586336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 01:29:43.591160 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:29:43.592782 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 01:29:43.601253 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 01:29:43.610210 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 01:29:43.610333 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:29:43.613349 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 01:29:43.613473 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:29:43.619362 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 01:29:43.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.619463 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:29:43.625647 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 01:29:43.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.625972 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:29:43.636287 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 01:29:43.636460 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 01:29:43.640940 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 01:29:43.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:29:43.649136 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 01:29:43.649366 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:29:43.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:43.654304 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:29:43.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:43.665572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:29:43.665804 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:29:43.666962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:29:43.667132 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:29:43.671987 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:29:43.672128 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:29:43.678729 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:29:43.725244 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:29:43.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:43.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:43.875450 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:29:44.188305 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 14 01:29:44.188602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:29:44.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.235471 ignition[1104]: INFO : Ignition 2.24.0 Jan 14 01:29:44.235471 ignition[1104]: INFO : Stage: umount Jan 14 01:29:44.241424 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:29:44.241424 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 14 01:29:44.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.254956 ignition[1104]: INFO : umount: umount passed Jan 14 01:29:44.254956 ignition[1104]: INFO : Ignition finished successfully Jan 14 01:29:44.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.249658 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:29:44.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:44.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.249932 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:29:44.255979 systemd[1]: Stopped target network.target - Network. Jan 14 01:29:44.261699 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:29:44.261795 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:29:44.271411 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:29:44.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.271484 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:29:44.331000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:29:44.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.280175 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:29:44.280254 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:29:44.343000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:29:44.284240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:29:44.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.284312 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 14 01:29:44.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.288409 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:29:44.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.288487 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:29:44.296392 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:29:44.300353 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:29:44.317819 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:29:44.318098 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:29:44.329722 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:29:44.329967 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:29:44.339360 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:29:44.343691 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:29:44.343751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:29:44.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.352475 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:29:44.361304 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 14 01:29:44.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.361385 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:29:44.368678 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:29:44.368762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:29:44.372422 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:29:44.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.372491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:29:44.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.389031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:29:44.419488 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:29:44.419768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:29:44.423700 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:29:44.423768 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:29:44.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.429895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 14 01:29:44.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.429944 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:29:44.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.440916 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:29:44.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.440992 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:29:44.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:44.451363 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:29:44.451431 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:29:44.466447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:29:44.466598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:29:44.486389 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:29:44.489131 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 14 01:29:44.489191 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:29:44.504567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:29:44.504642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:29:44.512990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:29:44.513049 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:29:44.521810 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:29:44.522101 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:29:44.528474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:29:44.528750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:29:44.541441 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:29:44.546340 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:29:44.591090 systemd[1]: Switching root. Jan 14 01:29:44.634151 systemd-journald[319]: Journal stopped Jan 14 01:29:46.671692 systemd-journald[319]: Received SIGTERM from PID 1 (systemd). 
Jan 14 01:29:46.671917 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:29:46.672009 kernel: SELinux: policy capability open_perms=1 Jan 14 01:29:46.672057 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:29:46.672070 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:29:46.672086 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:29:46.672097 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:29:46.672108 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:29:46.672119 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:29:46.672166 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:29:46.672186 systemd[1]: Successfully loaded SELinux policy in 101.147ms. Jan 14 01:29:46.672259 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.342ms. Jan 14 01:29:46.672279 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:29:46.672297 systemd[1]: Detected virtualization kvm. Jan 14 01:29:46.672318 systemd[1]: Detected architecture x86-64. Jan 14 01:29:46.672336 systemd[1]: Detected first boot. Jan 14 01:29:46.672406 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:29:46.672429 zram_generator::config[1148]: No configuration found. Jan 14 01:29:46.672448 kernel: Guest personality initialized and is inactive Jan 14 01:29:46.672464 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:29:46.672524 kernel: Initialized host personality Jan 14 01:29:46.672591 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:29:46.672609 systemd[1]: Populated /etc with preset unit settings. 
Jan 14 01:29:46.672680 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:29:46.672699 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:29:46.672716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:29:46.672740 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:29:46.672759 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:29:46.672777 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:29:46.672793 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:29:46.674064 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:29:46.674082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:29:46.674096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:29:46.674107 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:29:46.674119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:29:46.674177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:29:46.674334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:29:46.674356 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:29:46.674378 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:29:46.674397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:29:46.674467 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 14 01:29:46.674491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:29:46.674509 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:29:46.674527 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:29:46.674614 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:29:46.674635 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:29:46.674705 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:29:46.674727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:29:46.674744 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:29:46.674761 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:29:46.674778 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:29:46.674795 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:29:46.674813 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:29:46.674922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:29:46.674941 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:29:46.674953 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:29:46.674966 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:29:46.674984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:29:46.675005 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:29:46.675027 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:29:46.675103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 14 01:29:46.675124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:29:46.675143 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:29:46.675161 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:29:46.675181 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:29:46.675199 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:29:46.675217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:46.675235 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:29:46.675300 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:29:46.675321 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:29:46.675341 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:29:46.675359 systemd[1]: Reached target machines.target - Containers. Jan 14 01:29:46.675377 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:29:46.675396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:29:46.675458 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:29:46.675478 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:29:46.675496 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:29:46.675514 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:29:46.675579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 14 01:29:46.675599 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:29:46.675646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:29:46.675701 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:29:46.675721 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:29:46.675741 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:29:46.675761 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:29:46.675782 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:29:46.675920 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:29:46.675947 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:29:46.675969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:29:46.675988 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:29:46.676008 kernel: ACPI: bus type drm_connector registered Jan 14 01:29:46.676027 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:29:46.676101 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:29:46.676116 kernel: fuse: init (API version 7.41) Jan 14 01:29:46.676128 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:29:46.676164 systemd-journald[1234]: Collecting audit messages is enabled. 
Jan 14 01:29:46.676222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:46.676286 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:29:46.676301 systemd-journald[1234]: Journal started Jan 14 01:29:46.676322 systemd-journald[1234]: Runtime Journal (/run/log/journal/0cb2d66674e644bc8c0caa4f152a1efc) is 6M, max 48M, 42M free. Jan 14 01:29:46.683015 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:29:46.279000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:29:46.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:46.584000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:29:46.584000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:29:46.586000 audit: BPF prog-id=15 op=LOAD Jan 14 01:29:46.586000 audit: BPF prog-id=16 op=LOAD Jan 14 01:29:46.586000 audit: BPF prog-id=17 op=LOAD Jan 14 01:29:46.658000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:29:46.658000 audit[1234]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffecfe1e2e0 a2=4000 a3=0 items=0 ppid=1 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:29:46.658000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:29:45.966830 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:29:45.995430 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 01:29:45.997001 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:29:45.997750 systemd[1]: systemd-journald.service: Consumed 1.153s CPU time. Jan 14 01:29:46.689963 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:29:46.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.696417 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:29:46.701722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:29:46.707611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:29:46.711498 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 14 01:29:46.715335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:29:46.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.721104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:29:46.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.726282 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:29:46.726684 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:29:46.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.732942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:29:46.733203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:29:46.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:29:46.738338 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:29:46.738769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:29:46.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.744219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:29:46.744600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:29:46.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.750758 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:29:46.751109 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:29:46.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:46.757033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:29:46.757383 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:29:46.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.763052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:29:46.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.769133 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:29:46.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.776983 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:29:46.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.782778 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 14 01:29:46.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.805424 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:29:46.810598 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:29:46.817054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:29:46.823186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:29:46.826766 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:29:46.826825 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:29:46.831474 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:29:46.831825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:29:46.832029 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:29:46.842519 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:29:46.848188 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:29:46.852051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:29:46.853332 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:29:46.856894 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 01:29:46.861075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:29:46.863687 systemd-journald[1234]: Time spent on flushing to /var/log/journal/0cb2d66674e644bc8c0caa4f152a1efc is 23.510ms for 1191 entries. Jan 14 01:29:46.863687 systemd-journald[1234]: System Journal (/var/log/journal/0cb2d66674e644bc8c0caa4f152a1efc) is 8M, max 163.5M, 155.5M free. Jan 14 01:29:46.905664 systemd-journald[1234]: Received client request to flush runtime journal. Jan 14 01:29:46.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.872022 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:29:46.892473 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:29:46.897620 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:29:46.903091 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:29:46.907789 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:29:46.914800 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:29:46.914980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:29:46.920711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:29:46.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:46.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.928280 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:29:46.936060 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:29:46.963113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:29:46.968940 kernel: loop2: detected capacity change from 0 to 50784 Jan 14 01:29:46.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.986398 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:29:46.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:46.993000 audit: BPF prog-id=18 op=LOAD Jan 14 01:29:46.993000 audit: BPF prog-id=19 op=LOAD Jan 14 01:29:46.993000 audit: BPF prog-id=20 op=LOAD Jan 14 01:29:46.995670 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:29:47.000000 audit: BPF prog-id=21 op=LOAD Jan 14 01:29:47.002013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:29:47.007106 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:29:47.012030 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:29:47.012992 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 14 01:29:47.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:47.031000 audit: BPF prog-id=22 op=LOAD Jan 14 01:29:47.031000 audit: BPF prog-id=23 op=LOAD Jan 14 01:29:47.031000 audit: BPF prog-id=24 op=LOAD Jan 14 01:29:47.034944 kernel: loop3: detected capacity change from 0 to 229808 Jan 14 01:29:47.035105 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:29:47.041000 audit: BPF prog-id=25 op=LOAD Jan 14 01:29:47.042000 audit: BPF prog-id=26 op=LOAD Jan 14 01:29:47.043000 audit: BPF prog-id=27 op=LOAD Jan 14 01:29:47.045305 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:29:47.919888 kernel: loop4: detected capacity change from 0 to 111560 Jan 14 01:29:47.929625 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 14 01:29:47.929643 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 14 01:29:47.942071 kernel: loop5: detected capacity change from 0 to 50784 Jan 14 01:29:47.943619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:29:47.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:47.952297 kernel: kauditd_printk_skb: 95 callbacks suppressed Jan 14 01:29:47.952337 kernel: audit: type=1130 audit(1768354187.949:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:47.997933 kernel: loop6: detected capacity change from 0 to 229808 Jan 14 01:29:48.010918 (sd-merge)[1295]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 14 01:29:48.014055 systemd-nsresourced[1290]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:29:48.017967 (sd-merge)[1295]: Merged extensions into '/usr'. Jan 14 01:29:48.019959 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:29:48.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.033899 kernel: audit: type=1130 audit(1768354188.024:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.025260 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:29:48.051327 kernel: audit: type=1130 audit(1768354188.038:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.051929 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:29:48.051948 systemd[1]: Reloading... Jan 14 01:29:48.217968 zram_generator::config[1344]: No configuration found. 
Jan 14 01:29:48.231370 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Jan 14 01:29:48.241308 systemd-resolved[1286]: Positive Trust Anchors: Jan 14 01:29:48.241932 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:29:48.241982 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:29:48.242034 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:29:48.248309 systemd-resolved[1286]: Defaulting to hostname 'linux'. Jan 14 01:29:48.451281 systemd[1]: Reloading finished in 398 ms. Jan 14 01:29:48.488294 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:29:48.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.493197 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:29:48.501905 kernel: audit: type=1130 audit(1768354188.491:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:48.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.507323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:29:48.516929 kernel: audit: type=1130 audit(1768354188.505:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:48.528244 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:29:48.536660 kernel: audit: type=1130 audit(1768354188.520:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:49.079122 systemd[1]: Starting ensure-sysext.service... Jan 14 01:29:49.088065 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 01:29:49.104000 audit: BPF prog-id=28 op=LOAD Jan 14 01:29:49.116028 kernel: audit: type=1334 audit(1768354189.104:146): prog-id=28 op=LOAD Jan 14 01:29:49.116122 kernel: audit: type=1334 audit(1768354189.104:147): prog-id=22 op=UNLOAD Jan 14 01:29:49.104000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:29:49.120442 kernel: audit: type=1334 audit(1768354189.105:148): prog-id=29 op=LOAD Jan 14 01:29:49.105000 audit: BPF prog-id=29 op=LOAD Jan 14 01:29:49.124661 kernel: audit: type=1334 audit(1768354189.105:149): prog-id=30 op=LOAD Jan 14 01:29:49.105000 audit: BPF prog-id=30 op=LOAD Jan 14 01:29:49.105000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:29:49.105000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:29:49.109000 audit: BPF prog-id=31 op=LOAD Jan 14 01:29:49.109000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:29:49.109000 audit: BPF prog-id=32 op=LOAD Jan 14 01:29:49.109000 audit: BPF prog-id=33 op=LOAD Jan 14 01:29:49.109000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:29:49.109000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:29:49.111000 audit: BPF prog-id=34 op=LOAD Jan 14 01:29:49.111000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:29:49.115000 audit: BPF prog-id=35 op=LOAD Jan 14 01:29:49.115000 audit: BPF prog-id=36 op=LOAD Jan 14 01:29:49.115000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:29:49.115000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:29:49.116000 audit: BPF prog-id=37 op=LOAD Jan 14 01:29:49.116000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:29:49.120000 audit: BPF prog-id=38 op=LOAD Jan 14 01:29:49.120000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:29:49.120000 audit: BPF prog-id=39 op=LOAD Jan 14 01:29:49.120000 audit: BPF prog-id=40 op=LOAD Jan 14 01:29:49.120000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:29:49.120000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:29:49.129489 systemd[1]: Reload requested from client PID 1374 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:29:49.129538 systemd[1]: Reloading... 
Jan 14 01:29:49.142336 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:29:49.142787 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:29:49.143265 systemd-tmpfiles[1375]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:29:49.145695 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 14 01:29:49.145959 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Jan 14 01:29:49.156464 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:29:49.156673 systemd-tmpfiles[1375]: Skipping /boot Jan 14 01:29:49.189285 systemd-tmpfiles[1375]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:29:49.189521 systemd-tmpfiles[1375]: Skipping /boot Jan 14 01:29:49.243951 zram_generator::config[1407]: No configuration found. Jan 14 01:29:49.479050 systemd[1]: Reloading finished in 349 ms. Jan 14 01:29:49.504266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:29:49.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:49.513000 audit: BPF prog-id=41 op=LOAD Jan 14 01:29:49.513000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:29:49.513000 audit: BPF prog-id=42 op=LOAD Jan 14 01:29:49.513000 audit: BPF prog-id=43 op=LOAD Jan 14 01:29:49.513000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:29:49.514000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:29:49.514000 audit: BPF prog-id=44 op=LOAD Jan 14 01:29:49.514000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:29:49.515000 audit: BPF prog-id=45 op=LOAD Jan 14 01:29:49.515000 audit: BPF prog-id=46 op=LOAD Jan 14 01:29:49.515000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:29:49.515000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:29:49.517000 audit: BPF prog-id=47 op=LOAD Jan 14 01:29:49.517000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:29:49.517000 audit: BPF prog-id=48 op=LOAD Jan 14 01:29:49.517000 audit: BPF prog-id=49 op=LOAD Jan 14 01:29:49.517000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:29:49.517000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:29:49.520000 audit: BPF prog-id=50 op=LOAD Jan 14 01:29:49.541000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:29:49.542000 audit: BPF prog-id=51 op=LOAD Jan 14 01:29:49.542000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:29:49.542000 audit: BPF prog-id=52 op=LOAD Jan 14 01:29:49.542000 audit: BPF prog-id=53 op=LOAD Jan 14 01:29:49.542000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:29:49.542000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:29:49.547688 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:29:49.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:49.566134 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:29:49.572621 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 14 01:29:49.580549 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:29:49.607319 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:29:49.612000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:29:49.612000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:29:49.613000 audit: BPF prog-id=54 op=LOAD Jan 14 01:29:49.613000 audit: BPF prog-id=55 op=LOAD Jan 14 01:29:49.619261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:29:49.627361 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:29:49.637487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.637709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:29:49.642147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:29:49.647633 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:29:49.654148 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:29:49.658773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:29:49.659269 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:29:49.659451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 14 01:29:49.659684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.665317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.665520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:29:49.665768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:29:49.666018 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:29:49.666099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:29:49.666173 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.666000 audit[1457]: SYSTEM_BOOT pid=1457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:29:49.673967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:29:49.674341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:29:49.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:29:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:49.689335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.689713 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:29:49.697398 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:29:49.710432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:29:49.715617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:29:49.716143 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:29:49.716288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:29:49.716462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:29:49.721201 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:29:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:29:49.729745 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 01:29:49.734142 augenrules[1477]: No rules Jan 14 01:29:49.732000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:29:49.732000 audit[1477]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe794b34f0 a2=420 a3=0 items=0 ppid=1446 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:29:49.732000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:29:49.737089 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:29:49.738044 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:29:49.742193 systemd-udevd[1451]: Using default interface naming scheme 'v257'. Jan 14 01:29:49.744042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:29:49.744340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:29:49.750389 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:29:49.751017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:29:49.756601 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:29:49.757028 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:29:49.762182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:29:49.762462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:29:49.773465 systemd[1]: Finished ensure-sysext.service. Jan 14 01:29:49.778991 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jan 14 01:29:49.798826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:29:49.799077 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:29:49.802243 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 01:29:49.808235 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:29:49.819642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:29:49.835086 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:29:50.012784 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:29:50.044128 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:29:50.052812 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:29:50.119148 systemd-networkd[1498]: lo: Link UP Jan 14 01:29:50.119182 systemd-networkd[1498]: lo: Gained carrier Jan 14 01:29:50.120695 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:29:50.125026 systemd[1]: Reached target network.target - Network. Jan 14 01:29:50.130475 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:29:50.141238 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:29:50.369299 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:29:50.375987 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 14 01:29:50.395311 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:29:50.395364 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:29:50.402294 systemd-networkd[1498]: eth0: Link UP Jan 14 01:29:50.403903 systemd-networkd[1498]: eth0: Gained carrier Jan 14 01:29:50.403931 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:29:50.420923 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 01:29:50.423102 systemd-networkd[1498]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:29:50.423554 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:29:50.429242 systemd-timesyncd[1491]: Network configuration changed, trying to establish connection. Jan 14 01:29:51.716687 systemd-timesyncd[1491]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 01:29:51.716789 systemd-timesyncd[1491]: Initial clock synchronization to Wed 2026-01-14 01:29:51.716127 UTC. Jan 14 01:29:51.720881 systemd-resolved[1286]: Clock change detected. Flushing caches. Jan 14 01:29:51.727676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 14 01:29:51.753348 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 14 01:29:51.761328 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 01:29:51.761748 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:29:51.824279 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:29:51.843979 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:29:52.262533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:29:52.296232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:29:52.297063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:29:52.307576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:29:52.417354 kernel: kvm_amd: TSC scaling supported Jan 14 01:29:52.417425 kernel: kvm_amd: Nested Virtualization enabled Jan 14 01:29:52.417441 kernel: kvm_amd: Nested Paging enabled Jan 14 01:29:52.421000 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 01:29:52.422351 kernel: kvm_amd: PMU virtualization is disabled Jan 14 01:29:52.484227 ldconfig[1448]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:29:52.506435 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:29:52.525104 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:29:52.552368 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:29:52.573024 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:29:52.585714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:29:52.594390 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:29:52.600562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 14 01:29:52.607489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:29:52.614673 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:29:52.621776 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:29:52.627974 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:29:52.634734 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:29:52.642034 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:29:52.646574 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:29:52.655137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:29:52.655259 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:29:52.710296 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:29:52.931185 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:29:52.943274 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:29:52.954974 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:29:52.959842 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:29:52.964106 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:29:52.971961 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:29:52.976533 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:29:52.982322 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 14 01:29:52.987543 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:29:52.992050 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:29:52.995423 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:29:52.995489 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:29:52.997285 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:29:53.002530 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:29:53.007438 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:29:53.016031 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:29:53.030307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:29:53.034208 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:29:53.036321 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:29:53.038032 jq[1566]: false Jan 14 01:29:53.042353 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:29:53.069523 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:29:53.076199 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:29:53.081726 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 14 01:29:53.082962 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 14 01:29:53.083200 oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 14 01:29:53.089021 extend-filesystems[1567]: Found /dev/vda6 Jan 14 01:29:53.094164 extend-filesystems[1567]: Found /dev/vda9 Jan 14 01:29:53.099329 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:29:53.106134 extend-filesystems[1567]: Checking size of /dev/vda9 Jan 14 01:29:53.104080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:29:53.105986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:29:53.107060 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:29:53.115103 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 14 01:29:53.115157 oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 14 01:29:53.115220 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:29:53.115246 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:29:53.115326 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 14 01:29:53.115369 oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 14 01:29:53.120103 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 14 01:29:53.123500 extend-filesystems[1567]: Resized partition /dev/vda9 Jan 14 01:29:53.131032 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:29:53.138330 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 01:29:53.138581 jq[1590]: true Jan 14 01:29:53.139171 oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 14 01:29:53.139793 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 14 01:29:53.139793 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:29:53.139185 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:29:53.160202 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:29:53.167098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 01:29:53.167524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:29:53.168050 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:29:53.168355 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:29:53.175708 update_engine[1586]: I20260114 01:29:53.175531 1586 main.cc:92] Flatcar Update Engine starting Jan 14 01:29:53.175716 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:29:53.176383 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:29:53.217542 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 01:29:53.186258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:29:53.186709 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 14 01:29:53.219273 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 01:29:53.219273 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 01:29:53.219273 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 01:29:53.241277 extend-filesystems[1567]: Resized filesystem in /dev/vda9 Jan 14 01:29:53.222498 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:29:53.224711 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:29:53.274028 jq[1599]: true Jan 14 01:29:53.295607 tar[1595]: linux-amd64/LICENSE Jan 14 01:29:53.298865 tar[1595]: linux-amd64/helm Jan 14 01:29:53.499824 dbus-daemon[1564]: [system] SELinux support is enabled Jan 14 01:29:53.502093 systemd-logind[1583]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:29:53.502127 systemd-logind[1583]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:29:53.505085 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:29:53.511410 systemd-logind[1583]: New seat seat0. Jan 14 01:29:53.524126 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:29:53.550383 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:29:53.524177 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:29:53.533696 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:29:53.533723 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 14 01:29:53.541841 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:29:53.553268 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:29:53.559439 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 01:29:53.566746 dbus-daemon[1564]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 01:29:53.569590 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:29:53.575493 update_engine[1586]: I20260114 01:29:53.573877 1586 update_check_scheduler.cc:74] Next update check in 5m33s Jan 14 01:29:53.584166 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:29:53.948846 systemd-networkd[1498]: eth0: Gained IPv6LL Jan 14 01:29:53.998873 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:29:54.025513 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:29:54.033205 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:29:54.037436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:29:54.115047 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:29:54.627982 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 01:29:54.916702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:29:55.035342 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:29:55.106479 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:29:55.213550 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:29:55.274760 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:42466.service - OpenSSH per-connection server daemon (10.0.0.1:42466). Jan 14 01:29:55.311521 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 14 01:29:55.312178 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:29:55.318439 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 01:29:55.318844 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 01:29:55.325118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:29:55.333311 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:29:55.336135 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:29:55.352429 containerd[1601]: time="2026-01-14T01:29:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:29:55.357588 containerd[1601]: time="2026-01-14T01:29:55.356577310Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:29:55.373597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:29:55.384801 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:29:55.392169 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:29:55.398747 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 14 01:29:55.399405 containerd[1601]: time="2026-01-14T01:29:55.399356801Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.839µs" Jan 14 01:29:55.399530 containerd[1601]: time="2026-01-14T01:29:55.399515047Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:29:55.399783 containerd[1601]: time="2026-01-14T01:29:55.399758351Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:29:55.399867 containerd[1601]: time="2026-01-14T01:29:55.399849360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:29:55.400353 containerd[1601]: time="2026-01-14T01:29:55.400333244Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:29:55.400462 containerd[1601]: time="2026-01-14T01:29:55.400447717Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:29:55.400572 containerd[1601]: time="2026-01-14T01:29:55.400556731Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:29:55.400697 containerd[1601]: time="2026-01-14T01:29:55.400681775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401139 containerd[1601]: time="2026-01-14T01:29:55.401116455Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401197 containerd[1601]: time="2026-01-14T01:29:55.401184753Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401239 containerd[1601]: time="2026-01-14T01:29:55.401228124Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401276 containerd[1601]: time="2026-01-14T01:29:55.401266436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401587 containerd[1601]: time="2026-01-14T01:29:55.401568840Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401691 containerd[1601]: time="2026-01-14T01:29:55.401634622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:29:55.401827 containerd[1601]: time="2026-01-14T01:29:55.401812445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.402211 containerd[1601]: time="2026-01-14T01:29:55.402191131Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.402323 containerd[1601]: time="2026-01-14T01:29:55.402306808Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:29:55.402367 containerd[1601]: time="2026-01-14T01:29:55.402356520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:29:55.402554 containerd[1601]: time="2026-01-14T01:29:55.402538029Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:29:55.403849 containerd[1601]: 
time="2026-01-14T01:29:55.403825893Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:29:55.404115 containerd[1601]: time="2026-01-14T01:29:55.404091869Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:29:55.413074 containerd[1601]: time="2026-01-14T01:29:55.413044820Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:29:55.413311 containerd[1601]: time="2026-01-14T01:29:55.413293094Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:29:55.413505 containerd[1601]: time="2026-01-14T01:29:55.413484861Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:29:55.413556 containerd[1601]: time="2026-01-14T01:29:55.413544784Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:29:55.413602 containerd[1601]: time="2026-01-14T01:29:55.413591040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:29:55.413734 containerd[1601]: time="2026-01-14T01:29:55.413717526Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:29:55.413785 containerd[1601]: time="2026-01-14T01:29:55.413774192Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:29:55.413827 containerd[1601]: time="2026-01-14T01:29:55.413816400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:29:55.413867 containerd[1601]: time="2026-01-14T01:29:55.413857778Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 
01:29:55.414003 containerd[1601]: time="2026-01-14T01:29:55.413987680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:29:55.414064 containerd[1601]: time="2026-01-14T01:29:55.414053223Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:29:55.414129 containerd[1601]: time="2026-01-14T01:29:55.414108385Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:29:55.414198 containerd[1601]: time="2026-01-14T01:29:55.414180851Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:29:55.414269 containerd[1601]: time="2026-01-14T01:29:55.414253296Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:29:55.414487 containerd[1601]: time="2026-01-14T01:29:55.414468738Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:29:55.414552 containerd[1601]: time="2026-01-14T01:29:55.414540603Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:29:55.414599 containerd[1601]: time="2026-01-14T01:29:55.414589023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:29:55.414730 containerd[1601]: time="2026-01-14T01:29:55.414714096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:29:55.414781 containerd[1601]: time="2026-01-14T01:29:55.414769730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:29:55.414820 containerd[1601]: time="2026-01-14T01:29:55.414810837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:29:55.414862 containerd[1601]: 
time="2026-01-14T01:29:55.414852164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:29:55.415015 containerd[1601]: time="2026-01-14T01:29:55.414996694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:29:55.415119 containerd[1601]: time="2026-01-14T01:29:55.415105066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:29:55.415165 containerd[1601]: time="2026-01-14T01:29:55.415155179Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:29:55.415208 containerd[1601]: time="2026-01-14T01:29:55.415195414Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:29:55.415294 containerd[1601]: time="2026-01-14T01:29:55.415276015Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:29:55.415474 containerd[1601]: time="2026-01-14T01:29:55.415456001Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:29:55.415564 containerd[1601]: time="2026-01-14T01:29:55.415551529Z" level=info msg="Start snapshots syncer" Jan 14 01:29:55.415746 containerd[1601]: time="2026-01-14T01:29:55.415729651Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:29:55.416562 containerd[1601]: time="2026-01-14T01:29:55.416504829Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:29:55.416847 containerd[1601]: time="2026-01-14T01:29:55.416830015Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:29:55.417097 containerd[1601]: 
time="2026-01-14T01:29:55.417079451Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:29:55.417250 containerd[1601]: time="2026-01-14T01:29:55.417234330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:29:55.417304 containerd[1601]: time="2026-01-14T01:29:55.417293651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:29:55.417361 containerd[1601]: time="2026-01-14T01:29:55.417348804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:29:55.417404 containerd[1601]: time="2026-01-14T01:29:55.417392816Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:29:55.417475 containerd[1601]: time="2026-01-14T01:29:55.417458018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:29:55.417548 containerd[1601]: time="2026-01-14T01:29:55.417531695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:29:55.417616 containerd[1601]: time="2026-01-14T01:29:55.417603569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:29:55.417718 containerd[1601]: time="2026-01-14T01:29:55.417704778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:29:55.417777 containerd[1601]: time="2026-01-14T01:29:55.417765311Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:29:55.417980 containerd[1601]: time="2026-01-14T01:29:55.417878673Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:29:55.418038 containerd[1601]: 
time="2026-01-14T01:29:55.418024866Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:29:55.418122 containerd[1601]: time="2026-01-14T01:29:55.418109063Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:29:55.418838 containerd[1601]: time="2026-01-14T01:29:55.418762135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:29:55.418838 containerd[1601]: time="2026-01-14T01:29:55.418827617Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:29:55.418977 containerd[1601]: time="2026-01-14T01:29:55.418848867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:29:55.420058 containerd[1601]: time="2026-01-14T01:29:55.419984566Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:29:55.420133 containerd[1601]: time="2026-01-14T01:29:55.420094662Z" level=info msg="runtime interface created" Jan 14 01:29:55.420133 containerd[1601]: time="2026-01-14T01:29:55.420125370Z" level=info msg="created NRI interface" Jan 14 01:29:55.420173 containerd[1601]: time="2026-01-14T01:29:55.420139195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:29:55.420173 containerd[1601]: time="2026-01-14T01:29:55.420162839Z" level=info msg="Connect containerd service" Jan 14 01:29:55.420246 containerd[1601]: time="2026-01-14T01:29:55.420207112Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:29:55.428477 containerd[1601]: time="2026-01-14T01:29:55.428419932Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:29:55.501821 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 42466 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:29:55.512738 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:29:55.531983 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:29:55.541256 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:29:55.572282 systemd-logind[1583]: New session 1 of user core. Jan 14 01:29:55.597870 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:29:55.609394 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:29:55.621693 containerd[1601]: time="2026-01-14T01:29:55.621616920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:29:55.622063 containerd[1601]: time="2026-01-14T01:29:55.621807045Z" level=info msg="Start subscribing containerd event" Jan 14 01:29:55.622445 containerd[1601]: time="2026-01-14T01:29:55.622427502Z" level=info msg="Start recovering state" Jan 14 01:29:55.622739 containerd[1601]: time="2026-01-14T01:29:55.622717524Z" level=info msg="Start event monitor" Jan 14 01:29:55.623022 containerd[1601]: time="2026-01-14T01:29:55.622863867Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:29:55.623174 containerd[1601]: time="2026-01-14T01:29:55.623152426Z" level=info msg="Start streaming server" Jan 14 01:29:55.627783 containerd[1601]: time="2026-01-14T01:29:55.623302696Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:29:55.628063 containerd[1601]: time="2026-01-14T01:29:55.627838693Z" level=info msg="runtime interface starting up..." 
Jan 14 01:29:55.628063 containerd[1601]: time="2026-01-14T01:29:55.627852068Z" level=info msg="starting plugins..." Jan 14 01:29:55.628063 containerd[1601]: time="2026-01-14T01:29:55.627870873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:29:55.628276 containerd[1601]: time="2026-01-14T01:29:55.628250211Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:29:55.628750 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:29:55.635053 (systemd)[1703]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:29:55.637181 containerd[1601]: time="2026-01-14T01:29:55.636057303Z" level=info msg="containerd successfully booted in 0.286197s" Jan 14 01:29:55.642196 systemd-logind[1583]: New session 2 of user core. Jan 14 01:29:55.692970 tar[1595]: linux-amd64/README.md Jan 14 01:29:55.720112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:29:56.177797 systemd[1703]: Queued start job for default target default.target. Jan 14 01:29:56.187326 systemd[1703]: Created slice app.slice - User Application Slice. Jan 14 01:29:56.187401 systemd[1703]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:29:56.187420 systemd[1703]: Reached target paths.target - Paths. Jan 14 01:29:56.187812 systemd[1703]: Reached target timers.target - Timers. Jan 14 01:29:56.191071 systemd[1703]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:29:56.192853 systemd[1703]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:29:56.234087 systemd[1703]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:29:56.234215 systemd[1703]: Reached target sockets.target - Sockets. Jan 14 01:29:56.269550 systemd[1703]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. 
Jan 14 01:29:56.269789 systemd[1703]: Reached target basic.target - Basic System. Jan 14 01:29:56.270025 systemd[1703]: Reached target default.target - Main User Target. Jan 14 01:29:56.270081 systemd[1703]: Startup finished in 610ms. Jan 14 01:29:56.270166 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:29:56.284163 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:29:56.433008 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:59212.service - OpenSSH per-connection server daemon (10.0.0.1:59212). Jan 14 01:29:56.560865 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 59212 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:29:56.564345 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:29:56.578432 systemd-logind[1583]: New session 3 of user core. Jan 14 01:29:56.593225 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:29:56.618750 sshd[1724]: Connection closed by 10.0.0.1 port 59212 Jan 14 01:29:56.619422 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jan 14 01:29:56.632312 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:59212.service: Deactivated successfully. Jan 14 01:29:56.635409 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:29:56.636957 systemd-logind[1583]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:29:56.642190 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:59224.service - OpenSSH per-connection server daemon (10.0.0.1:59224). Jan 14 01:29:56.648232 systemd-logind[1583]: Removed session 3. Jan 14 01:29:56.747377 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 59224 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:29:56.754976 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:29:56.763372 systemd-logind[1583]: New session 4 of user core. 
Jan 14 01:29:56.776124 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:29:56.801681 sshd[1734]: Connection closed by 10.0.0.1 port 59224 Jan 14 01:29:56.804164 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Jan 14 01:29:56.809397 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:59224.service: Deactivated successfully. Jan 14 01:29:56.811593 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:29:56.812809 systemd-logind[1583]: Session 4 logged out. Waiting for processes to exit. Jan 14 01:29:56.814420 systemd-logind[1583]: Removed session 4. Jan 14 01:29:58.850648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:29:58.867848 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:29:58.873882 systemd[1]: Startup finished in 7.705s (kernel) + 19.073s (initrd) + 12.850s (userspace) = 39.629s. Jan 14 01:29:58.904800 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:30:01.638456 kubelet[1743]: E0114 01:30:01.637809 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:30:01.648442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:30:01.649484 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:30:01.652653 systemd[1]: kubelet.service: Consumed 4.622s CPU time, 269M memory peak. Jan 14 01:30:06.873059 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:47276.service - OpenSSH per-connection server daemon (10.0.0.1:47276). 
Jan 14 01:30:07.198610 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 47276 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:07.213248 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:07.266382 systemd-logind[1583]: New session 5 of user core.
Jan 14 01:30:07.292311 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 01:30:07.391500 sshd[1761]: Connection closed by 10.0.0.1 port 47276
Jan 14 01:30:07.393991 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Jan 14 01:30:07.415636 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:47276.service: Deactivated successfully.
Jan 14 01:30:07.423399 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 01:30:07.432179 systemd-logind[1583]: Session 5 logged out. Waiting for processes to exit.
Jan 14 01:30:07.433543 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:47278.service - OpenSSH per-connection server daemon (10.0.0.1:47278).
Jan 14 01:30:07.448498 systemd-logind[1583]: Removed session 5.
Jan 14 01:30:07.648093 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 47278 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:07.654319 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:07.682816 systemd-logind[1583]: New session 6 of user core.
Jan 14 01:30:07.698290 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 01:30:07.747843 sshd[1771]: Connection closed by 10.0.0.1 port 47278
Jan 14 01:30:07.749582 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Jan 14 01:30:07.771076 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:47278.service: Deactivated successfully.
Jan 14 01:30:07.775886 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 01:30:07.778097 systemd-logind[1583]: Session 6 logged out. Waiting for processes to exit.
Jan 14 01:30:07.786486 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:47292.service - OpenSSH per-connection server daemon (10.0.0.1:47292).
Jan 14 01:30:07.790123 systemd-logind[1583]: Removed session 6.
Jan 14 01:30:07.994069 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 47292 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:08.001211 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:08.048030 systemd-logind[1583]: New session 7 of user core.
Jan 14 01:30:08.080577 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 01:30:08.305574 sshd[1782]: Connection closed by 10.0.0.1 port 47292
Jan 14 01:30:08.305593 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Jan 14 01:30:08.331273 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:47292.service: Deactivated successfully.
Jan 14 01:30:08.333728 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 01:30:08.343507 systemd-logind[1583]: Session 7 logged out. Waiting for processes to exit.
Jan 14 01:30:08.348492 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:47306.service - OpenSSH per-connection server daemon (10.0.0.1:47306).
Jan 14 01:30:08.357881 systemd-logind[1583]: Removed session 7.
Jan 14 01:30:08.481043 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 47306 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:08.484085 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:08.509403 systemd-logind[1583]: New session 8 of user core.
Jan 14 01:30:08.529163 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 01:30:08.664660 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 01:30:08.665602 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:30:08.700556 sudo[1793]: pam_unix(sudo:session): session closed for user root
Jan 14 01:30:08.708966 sshd[1792]: Connection closed by 10.0.0.1 port 47306
Jan 14 01:30:08.710292 sshd-session[1788]: pam_unix(sshd:session): session closed for user core
Jan 14 01:30:08.732433 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:47306.service: Deactivated successfully.
Jan 14 01:30:08.747520 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 01:30:08.764852 systemd-logind[1583]: Session 8 logged out. Waiting for processes to exit.
Jan 14 01:30:08.770584 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:47320.service - OpenSSH per-connection server daemon (10.0.0.1:47320).
Jan 14 01:30:08.778224 systemd-logind[1583]: Removed session 8.
Jan 14 01:30:09.092830 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 47320 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:09.095334 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:09.133102 systemd-logind[1583]: New session 9 of user core.
Jan 14 01:30:09.147361 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 01:30:09.229353 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 01:30:09.230419 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:30:09.278396 sudo[1806]: pam_unix(sudo:session): session closed for user root
Jan 14 01:30:09.308495 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 01:30:09.309177 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:30:09.342693 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 01:30:09.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:30:09.592089 kernel: kauditd_printk_skb: 61 callbacks suppressed
Jan 14 01:30:09.592193 kernel: audit: type=1305 audit(1768354209.577:209): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:30:09.589583 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 01:30:09.592322 augenrules[1830]: No rules
Jan 14 01:30:09.590260 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 01:30:09.596651 sudo[1805]: pam_unix(sudo:session): session closed for user root
Jan 14 01:30:09.603056 kernel: audit: type=1300 audit(1768354209.577:209): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8081e910 a2=420 a3=0 items=0 ppid=1811 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:09.577000 audit[1830]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8081e910 a2=420 a3=0 items=0 ppid=1811 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:09.605886 sshd[1804]: Connection closed by 10.0.0.1 port 47320
Jan 14 01:30:09.609018 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Jan 14 01:30:09.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:30:09.674355 kernel: audit: type=1327 audit(1768354209.577:209): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:30:09.674433 kernel: audit: type=1130 audit(1768354209.589:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.696168 kernel: audit: type=1131 audit(1768354209.589:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.592000 audit[1805]: USER_END pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.736070 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:47320.service: Deactivated successfully.
Jan 14 01:30:09.740123 kernel: audit: type=1106 audit(1768354209.592:212): pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.740200 kernel: audit: type=1104 audit(1768354209.596:213): pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.596000 audit[1805]: CRED_DISP pid=1805 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.745556 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 01:30:09.756644 systemd-logind[1583]: Session 9 logged out. Waiting for processes to exit.
Jan 14 01:30:09.763863 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:47326.service - OpenSSH per-connection server daemon (10.0.0.1:47326).
Jan 14 01:30:09.768355 systemd-logind[1583]: Removed session 9.
Jan 14 01:30:09.617000 audit[1800]: USER_END pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:09.784318 kernel: audit: type=1106 audit(1768354209.617:214): pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:09.617000 audit[1800]: CRED_DISP pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:09.848344 kernel: audit: type=1104 audit(1768354209.617:215): pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:09.848466 kernel: audit: type=1131 audit(1768354209.735:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.15:22-10.0.0.1:47320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.15:22-10.0.0.1:47320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:09.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:47326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:10.162000 audit[1839]: USER_ACCT pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:10.172630 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 47326 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:30:10.171000 audit[1839]: CRED_ACQ pid=1839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:10.172000 audit[1839]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc03a531e0 a2=3 a3=0 items=0 ppid=1 pid=1839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:10.172000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:30:10.174154 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:30:10.207163 systemd-logind[1583]: New session 10 of user core.
Jan 14 01:30:10.224442 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 01:30:10.241000 audit[1839]: USER_START pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:10.262000 audit[1843]: CRED_ACQ pid=1843 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:30:10.321000 audit[1844]: USER_ACCT pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:10.325577 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 01:30:10.326400 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:30:10.325000 audit[1844]: CRED_REFR pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:10.325000 audit[1844]: USER_START pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:11.852704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 01:30:11.866588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:30:14.406424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:30:14.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:30:14.574356 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:30:15.407784 kubelet[1874]: E0114 01:30:15.407151 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:30:15.427379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:30:15.427797 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:30:15.460071 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jan 14 01:30:15.462263 kernel: audit: type=1131 audit(1768354215.428:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:30:15.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:30:15.431677 systemd[1]: kubelet.service: Consumed 1.861s CPU time, 110.1M memory peak.
Jan 14 01:30:15.577576 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 01:30:15.661551 (dockerd)[1885]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 01:30:19.370675 dockerd[1885]: time="2026-01-14T01:30:19.366345532Z" level=info msg="Starting up"
Jan 14 01:30:19.395769 dockerd[1885]: time="2026-01-14T01:30:19.395551471Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 14 01:30:19.668729 dockerd[1885]: time="2026-01-14T01:30:19.665139853Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 14 01:30:19.992793 dockerd[1885]: time="2026-01-14T01:30:19.987594427Z" level=info msg="Loading containers: start."
Jan 14 01:30:20.096197 kernel: Initializing XFRM netlink socket
Jan 14 01:30:20.654000 audit[1938]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.700411 kernel: audit: type=1325 audit(1768354220.654:228): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.700572 kernel: audit: type=1300 audit(1768354220.654:228): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64b4fde0 a2=0 a3=0 items=0 ppid=1885 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.654000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64b4fde0 a2=0 a3=0 items=0 ppid=1885 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.701293 kernel: audit: type=1327 audit(1768354220.654:228): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 01:30:20.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 01:30:20.672000 audit[1940]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.725433 kernel: audit: type=1325 audit(1768354220.672:229): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.725493 kernel: audit: type=1300 audit(1768354220.672:229): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffe8187f70 a2=0 a3=0 items=0 ppid=1885 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.672000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffe8187f70 a2=0 a3=0 items=0 ppid=1885 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.751659 kernel: audit: type=1327 audit(1768354220.672:229): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 01:30:20.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 01:30:20.684000 audit[1942]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.684000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeeee7f70 a2=0 a3=0 items=0 ppid=1885 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.799290 kernel: audit: type=1325 audit(1768354220.684:230): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.799482 kernel: audit: type=1300 audit(1768354220.684:230): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeeee7f70 a2=0 a3=0 items=0 ppid=1885 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.799533 kernel: audit: type=1327 audit(1768354220.684:230): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 01:30:20.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 01:30:20.813215 kernel: audit: type=1325 audit(1768354220.693:231): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.693000 audit[1944]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.693000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6af55750 a2=0 a3=0 items=0 ppid=1885 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 14 01:30:20.710000 audit[1946]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.710000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc3e0c39c0 a2=0 a3=0 items=0 ppid=1885 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 14 01:30:20.723000 audit[1948]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.723000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe3dad1120 a2=0 a3=0 items=0 ppid=1885 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:30:20.743000 audit[1950]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.743000 audit[1950]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5c2e0c00 a2=0 a3=0 items=0 ppid=1885 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 14 01:30:20.769000 audit[1952]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.769000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffda30978a0 a2=0 a3=0 items=0 ppid=1885 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.769000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 14 01:30:20.925000 audit[1955]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.925000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffffa7b1680 a2=0 a3=0 items=0 ppid=1885 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.925000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Jan 14 01:30:20.936000 audit[1957]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.936000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd6c1bacc0 a2=0 a3=0 items=0 ppid=1885 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 14 01:30:20.948000 audit[1959]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.948000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffde134cd40 a2=0 a3=0 items=0 ppid=1885 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.948000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 14 01:30:20.960000 audit[1961]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.960000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc4d50f290 a2=0 a3=0 items=0 ppid=1885 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:30:20.968000 audit[1963]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 01:30:20.968000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd3eca4340 a2=0 a3=0 items=0 ppid=1885 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:20.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354
Jan 14 01:30:21.129000 audit[1993]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.129000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe1291b160 a2=0 a3=0 items=0 ppid=1885 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Jan 14 01:30:21.142000 audit[1995]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.142000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff1b822ad0 a2=0 a3=0 items=0 ppid=1885 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Jan 14 01:30:21.155000 audit[1997]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.155000 audit[1997]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb09f7900 a2=0 a3=0 items=0 ppid=1885 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244
Jan 14 01:30:21.163000 audit[1999]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.163000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffef9eaf80 a2=0 a3=0 items=0 ppid=1885 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745
Jan 14 01:30:21.171000 audit[2001]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.171000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7218cfa0 a2=0 a3=0 items=0 ppid=1885 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354
Jan 14 01:30:21.182000 audit[2003]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.182000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcf5505930 a2=0 a3=0 items=0 ppid=1885 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Jan 14 01:30:21.195000 audit[2005]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.195000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe91541040 a2=0 a3=0 items=0 ppid=1885 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.195000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Jan 14 01:30:21.212000 audit[2007]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.212000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc09477190 a2=0 a3=0 items=0 ppid=1885 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Jan 14 01:30:21.229000 audit[2009]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.229000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc79e1b270 a2=0 a3=0 items=0 ppid=1885 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238
Jan 14 01:30:21.239000 audit[2011]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.239000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe36774460 a2=0 a3=0 items=0 ppid=1885 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244
Jan 14 01:30:21.249000 audit[2013]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.249000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff00164f50 a2=0 a3=0 items=0 ppid=1885 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:30:21.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745
Jan 14 01:30:21.261000 audit[2015]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 01:30:21.261000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffa7696580 a2=0 a3=0 items=0 ppid=1885 pid=2015
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:30:21.270000 audit[2017]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:30:21.270000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe58d22580 a2=0 a3=0 items=0 ppid=1885 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:30:21.298000 audit[2022]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.298000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffef731c630 a2=0 a3=0 items=0 ppid=1885 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.298000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:30:21.314000 audit[2024]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.314000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff5b08cea0 a2=0 
a3=0 items=0 ppid=1885 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:30:21.326000 audit[2026]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.326000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc7df27200 a2=0 a3=0 items=0 ppid=1885 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:30:21.343000 audit[2028]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:30:21.343000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffeb7fc730 a2=0 a3=0 items=0 ppid=1885 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:30:21.361000 audit[2030]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:30:21.361000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd49948770 a2=0 a3=0 items=0 ppid=1885 
pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:30:21.375000 audit[2032]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:30:21.375000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa91fc7f0 a2=0 a3=0 items=0 ppid=1885 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:30:21.461000 audit[2036]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.461000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffeed7a13b0 a2=0 a3=0 items=0 ppid=1885 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:30:21.470000 audit[2038]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.470000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=288 a0=3 a1=7ffda675e3e0 a2=0 a3=0 items=0 ppid=1885 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:30:21.550000 audit[2046]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.550000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd2ba2d1a0 a2=0 a3=0 items=0 ppid=1885 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.550000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:30:21.663000 audit[2052]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.663000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff613ef550 a2=0 a3=0 items=0 ppid=1885 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:30:21.680000 audit[2054]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 01:30:21.680000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd59330030 a2=0 a3=0 items=0 ppid=1885 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:30:21.694000 audit[2056]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.694000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc65c24d10 a2=0 a3=0 items=0 ppid=1885 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:30:21.720000 audit[2058]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.720000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcd3ebdd30 a2=0 a3=0 items=0 ppid=1885 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.720000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:30:21.729000 audit[2060]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:30:21.729000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffa9843080 a2=0 a3=0 items=0 ppid=1885 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:30:21.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:30:21.731796 systemd-networkd[1498]: docker0: Link UP Jan 14 01:30:21.764109 dockerd[1885]: time="2026-01-14T01:30:21.761354269Z" level=info msg="Loading containers: done." Jan 14 01:30:21.835403 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1793858828-merged.mount: Deactivated successfully. 
Jan 14 01:30:21.872992 dockerd[1885]: time="2026-01-14T01:30:21.872715340Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:30:21.873678 dockerd[1885]: time="2026-01-14T01:30:21.873084118Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:30:21.873678 dockerd[1885]: time="2026-01-14T01:30:21.873346668Z" level=info msg="Initializing buildkit" Jan 14 01:30:22.043347 dockerd[1885]: time="2026-01-14T01:30:22.043104449Z" level=info msg="Completed buildkit initialization" Jan 14 01:30:22.063844 dockerd[1885]: time="2026-01-14T01:30:22.063121947Z" level=info msg="Daemon has completed initialization" Jan 14 01:30:22.065371 dockerd[1885]: time="2026-01-14T01:30:22.064030019Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:30:22.068056 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:30:22.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:25.601517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:30:25.608692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:30:26.188355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:30:26.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:30:26.194535 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 14 01:30:26.194625 kernel: audit: type=1130 audit(1768354226.188:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:26.222770 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:30:26.638211 kubelet[2111]: E0114 01:30:26.638023 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:30:26.643542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:30:26.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:30:26.644051 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:30:26.644671 systemd[1]: kubelet.service: Consumed 802ms CPU time, 108.6M memory peak. Jan 14 01:30:26.667669 kernel: audit: type=1131 audit(1768354226.642:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:30:27.088161 containerd[1601]: time="2026-01-14T01:30:27.084709837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:30:29.005485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934846294.mount: Deactivated successfully. 
Jan 14 01:30:34.828112 containerd[1601]: time="2026-01-14T01:30:34.827535796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:34.830485 containerd[1601]: time="2026-01-14T01:30:34.829805593Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968" Jan 14 01:30:34.833120 containerd[1601]: time="2026-01-14T01:30:34.832717167Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:34.842520 containerd[1601]: time="2026-01-14T01:30:34.842248343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:34.845414 containerd[1601]: time="2026-01-14T01:30:34.845052534Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 7.759910441s" Jan 14 01:30:34.845414 containerd[1601]: time="2026-01-14T01:30:34.845138885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:30:34.868516 containerd[1601]: time="2026-01-14T01:30:34.868126105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:30:36.859242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 14 01:30:36.865805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:30:37.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:37.791122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:30:37.840040 kernel: audit: type=1130 audit(1768354237.790:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:37.858081 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:30:38.746091 kubelet[2192]: E0114 01:30:38.744806 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:30:38.772191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:30:38.773243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:30:38.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:30:38.777016 systemd[1]: kubelet.service: Consumed 1.325s CPU time, 110.3M memory peak. Jan 14 01:30:38.823528 kernel: audit: type=1131 audit(1768354238.775:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 14 01:30:39.263164 update_engine[1586]: I20260114 01:30:39.262063 1586 update_attempter.cc:509] Updating boot flags... Jan 14 01:30:40.321106 containerd[1601]: time="2026-01-14T01:30:40.320296210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:40.323394 containerd[1601]: time="2026-01-14T01:30:40.323362238Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:30:40.326695 containerd[1601]: time="2026-01-14T01:30:40.326313926Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:40.333080 containerd[1601]: time="2026-01-14T01:30:40.332869525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:40.334844 containerd[1601]: time="2026-01-14T01:30:40.334095152Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 5.465858102s" Jan 14 01:30:40.334844 containerd[1601]: time="2026-01-14T01:30:40.334157908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:30:40.341443 containerd[1601]: time="2026-01-14T01:30:40.340800881Z" level=info msg="PullImage 
\"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:30:44.058221 containerd[1601]: time="2026-01-14T01:30:44.057678248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:44.060453 containerd[1601]: time="2026-01-14T01:30:44.059234139Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:30:44.061498 containerd[1601]: time="2026-01-14T01:30:44.061395207Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:44.065856 containerd[1601]: time="2026-01-14T01:30:44.065807796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:44.068021 containerd[1601]: time="2026-01-14T01:30:44.067598394Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 3.726580881s" Jan 14 01:30:44.068021 containerd[1601]: time="2026-01-14T01:30:44.067792584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:30:44.071648 containerd[1601]: time="2026-01-14T01:30:44.071560464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:30:46.569471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146512170.mount: Deactivated successfully. 
Jan 14 01:30:48.848594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:30:48.856050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:30:48.860629 containerd[1601]: time="2026-01-14T01:30:48.860387948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:48.863071 containerd[1601]: time="2026-01-14T01:30:48.863028399Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 14 01:30:48.865382 containerd[1601]: time="2026-01-14T01:30:48.865334529Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:48.869399 containerd[1601]: time="2026-01-14T01:30:48.869355610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:48.870000 containerd[1601]: time="2026-01-14T01:30:48.869779641Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.798182679s" Jan 14 01:30:48.870000 containerd[1601]: time="2026-01-14T01:30:48.869823424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:30:48.874216 containerd[1601]: time="2026-01-14T01:30:48.874086022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:30:49.290698 
systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:30:49.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:49.311547 kernel: audit: type=1130 audit(1768354249.290:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:30:49.322827 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:30:49.586124 kubelet[2234]: E0114 01:30:49.583652 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:30:49.588837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:30:49.589192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:30:49.590697 systemd[1]: kubelet.service: Consumed 605ms CPU time, 108M memory peak. Jan 14 01:30:49.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:30:49.623083 kernel: audit: type=1131 audit(1768354249.589:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:30:49.802416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961065171.mount: Deactivated successfully. Jan 14 01:30:52.252236 containerd[1601]: time="2026-01-14T01:30:52.251516881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:52.255134 containerd[1601]: time="2026-01-14T01:30:52.255063166Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20213486" Jan 14 01:30:52.258521 containerd[1601]: time="2026-01-14T01:30:52.258460731Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:52.265043 containerd[1601]: time="2026-01-14T01:30:52.264828586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:30:52.266539 containerd[1601]: time="2026-01-14T01:30:52.266349912Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.392224285s" Jan 14 01:30:52.266539 containerd[1601]: time="2026-01-14T01:30:52.266474974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:30:52.271224 containerd[1601]: time="2026-01-14T01:30:52.271182508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:30:52.810285 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1087580755.mount: Deactivated successfully. Jan 14 01:30:52.881216 containerd[1601]: time="2026-01-14T01:30:52.880562878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:30:52.883103 containerd[1601]: time="2026-01-14T01:30:52.882632630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:30:52.885006 containerd[1601]: time="2026-01-14T01:30:52.884833407Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:30:52.888663 containerd[1601]: time="2026-01-14T01:30:52.888479587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:30:52.890056 containerd[1601]: time="2026-01-14T01:30:52.889858522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.629097ms" Jan 14 01:30:52.890056 containerd[1601]: time="2026-01-14T01:30:52.890002119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:30:52.893596 containerd[1601]: time="2026-01-14T01:30:52.893556940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:30:53.514678 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911536686.mount: Deactivated successfully. Jan 14 01:30:59.600430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 01:30:59.609311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:31:00.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:00.134033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:31:00.153042 kernel: audit: type=1130 audit(1768354260.133:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:00.162586 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:31:00.499128 kubelet[2358]: E0114 01:31:00.494613 2358 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:31:00.506469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:31:00.507128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:31:00.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:31:00.509710 systemd[1]: kubelet.service: Consumed 750ms CPU time, 110.3M memory peak. 
Jan 14 01:31:00.531077 kernel: audit: type=1131 audit(1768354260.508:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:31:00.902050 containerd[1601]: time="2026-01-14T01:31:00.901723750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:00.904298 containerd[1601]: time="2026-01-14T01:31:00.904260647Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 14 01:31:00.906378 containerd[1601]: time="2026-01-14T01:31:00.906113642Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:00.913360 containerd[1601]: time="2026-01-14T01:31:00.913146246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:00.915042 containerd[1601]: time="2026-01-14T01:31:00.914690927Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.021090186s" Jan 14 01:31:00.915042 containerd[1601]: time="2026-01-14T01:31:00.914855634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:31:05.188407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:31:05.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:05.188716 systemd[1]: kubelet.service: Consumed 750ms CPU time, 110.3M memory peak. Jan 14 01:31:05.193228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:31:05.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:05.230493 kernel: audit: type=1130 audit(1768354265.187:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:05.230594 kernel: audit: type=1131 audit(1768354265.187:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:05.262120 systemd[1]: Reload requested from client PID 2400 ('systemctl') (unit session-10.scope)... Jan 14 01:31:05.262293 systemd[1]: Reloading... Jan 14 01:31:05.405078 zram_generator::config[2446]: No configuration found. Jan 14 01:31:05.702000 systemd[1]: Reloading finished in 439 ms. Jan 14 01:31:05.745000 audit: BPF prog-id=61 op=LOAD Jan 14 01:31:05.805128 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:31:05.805337 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:31:05.806249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:31:05.806313 systemd[1]: kubelet.service: Consumed 190ms CPU time, 98.5M memory peak. 
Jan 14 01:31:05.811997 kernel: audit: type=1334 audit(1768354265.745:279): prog-id=61 op=LOAD Jan 14 01:31:05.809749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:31:05.745000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:31:05.746000 audit: BPF prog-id=62 op=LOAD Jan 14 01:31:05.872838 kernel: audit: type=1334 audit(1768354265.745:280): prog-id=47 op=UNLOAD Jan 14 01:31:05.872993 kernel: audit: type=1334 audit(1768354265.746:281): prog-id=62 op=LOAD Jan 14 01:31:05.746000 audit: BPF prog-id=63 op=LOAD Jan 14 01:31:05.880064 kernel: audit: type=1334 audit(1768354265.746:282): prog-id=63 op=LOAD Jan 14 01:31:05.880179 kernel: audit: type=1334 audit(1768354265.746:283): prog-id=48 op=UNLOAD Jan 14 01:31:05.746000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:31:05.746000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:31:05.890092 kernel: audit: type=1334 audit(1768354265.746:284): prog-id=49 op=UNLOAD Jan 14 01:31:05.890129 kernel: audit: type=1334 audit(1768354265.747:285): prog-id=64 op=LOAD Jan 14 01:31:05.747000 audit: BPF prog-id=64 op=LOAD Jan 14 01:31:05.895113 kernel: audit: type=1334 audit(1768354265.747:286): prog-id=57 op=UNLOAD Jan 14 01:31:05.747000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:31:05.748000 audit: BPF prog-id=65 op=LOAD Jan 14 01:31:05.748000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:31:05.748000 audit: BPF prog-id=66 op=LOAD Jan 14 01:31:05.748000 audit: BPF prog-id=67 op=LOAD Jan 14 01:31:05.748000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:31:05.748000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:31:05.751000 audit: BPF prog-id=68 op=LOAD Jan 14 01:31:05.751000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:31:05.751000 audit: BPF prog-id=69 op=LOAD Jan 14 01:31:05.751000 audit: BPF prog-id=70 op=LOAD Jan 14 01:31:05.751000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:31:05.751000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:31:05.752000 audit: BPF prog-id=71 op=LOAD Jan 14 01:31:05.753000 audit: BPF prog-id=72 op=LOAD Jan 14 
01:31:05.753000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:31:05.753000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:31:05.754000 audit: BPF prog-id=73 op=LOAD Jan 14 01:31:05.754000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:31:05.754000 audit: BPF prog-id=74 op=LOAD Jan 14 01:31:05.754000 audit: BPF prog-id=75 op=LOAD Jan 14 01:31:05.754000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:31:05.754000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:31:05.755000 audit: BPF prog-id=76 op=LOAD Jan 14 01:31:05.755000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:31:05.755000 audit: BPF prog-id=77 op=LOAD Jan 14 01:31:05.755000 audit: BPF prog-id=78 op=LOAD Jan 14 01:31:05.755000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:31:05.756000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:31:05.759000 audit: BPF prog-id=79 op=LOAD Jan 14 01:31:05.759000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:31:05.760000 audit: BPF prog-id=80 op=LOAD Jan 14 01:31:05.760000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:31:05.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:31:06.157670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:31:06.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:06.178439 (kubelet)[2492]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:31:06.297445 kubelet[2492]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 01:31:06.297445 kubelet[2492]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:31:06.297445 kubelet[2492]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:31:06.298258 kubelet[2492]: I0114 01:31:06.298042 2492 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:31:06.730273 kubelet[2492]: I0114 01:31:06.730165 2492 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:31:06.730402 kubelet[2492]: I0114 01:31:06.730322 2492 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:31:06.731196 kubelet[2492]: I0114 01:31:06.731097 2492 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:31:06.782543 kubelet[2492]: E0114 01:31:06.782287 2492 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:31:06.794524 kubelet[2492]: I0114 01:31:06.794166 2492 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:31:06.817137 kubelet[2492]: I0114 01:31:06.817061 2492 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:31:06.830535 kubelet[2492]: I0114 01:31:06.830381 2492 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:31:06.831562 kubelet[2492]: I0114 01:31:06.831401 2492 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:31:06.832258 kubelet[2492]: I0114 01:31:06.831470 2492 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:31:06.832510 kubelet[2492]: I0114 01:31:06.832343 2492 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:31:06.832510 
kubelet[2492]: I0114 01:31:06.832399 2492 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:31:06.834756 kubelet[2492]: I0114 01:31:06.834547 2492 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:31:06.839408 kubelet[2492]: I0114 01:31:06.838744 2492 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:31:06.839408 kubelet[2492]: I0114 01:31:06.839024 2492 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:31:06.839408 kubelet[2492]: I0114 01:31:06.839364 2492 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:31:06.839523 kubelet[2492]: I0114 01:31:06.839498 2492 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:31:06.851223 kubelet[2492]: E0114 01:31:06.850771 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:31:06.854121 kubelet[2492]: E0114 01:31:06.853737 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:31:06.855408 kubelet[2492]: I0114 01:31:06.855306 2492 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:31:06.873459 kubelet[2492]: I0114 01:31:06.873381 2492 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:31:06.877029 kubelet[2492]: W0114 
01:31:06.876881 2492 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:31:06.897011 kubelet[2492]: I0114 01:31:06.896718 2492 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:31:06.897302 kubelet[2492]: I0114 01:31:06.897212 2492 server.go:1289] "Started kubelet" Jan 14 01:31:06.900512 kubelet[2492]: I0114 01:31:06.900091 2492 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:31:06.904978 kubelet[2492]: I0114 01:31:06.903789 2492 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:31:06.905324 kubelet[2492]: I0114 01:31:06.905195 2492 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:31:06.905324 kubelet[2492]: E0114 01:31:06.903608 2492 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a74d62b6f3183 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 01:31:06.896871811 +0000 UTC m=+0.709280742,LastTimestamp:2026-01-14 01:31:06.896871811 +0000 UTC m=+0.709280742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 01:31:06.908029 kubelet[2492]: I0114 01:31:06.907624 2492 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:31:06.909003 kubelet[2492]: I0114 01:31:06.908688 2492 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:31:06.909534 
kubelet[2492]: E0114 01:31:06.909451 2492 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:06.909952 kubelet[2492]: I0114 01:31:06.909718 2492 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:31:06.910595 kubelet[2492]: I0114 01:31:06.910500 2492 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:31:06.910595 kubelet[2492]: I0114 01:31:06.910586 2492 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:31:06.912175 kubelet[2492]: I0114 01:31:06.911171 2492 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:31:06.914553 kubelet[2492]: E0114 01:31:06.914455 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:31:06.914870 kubelet[2492]: I0114 01:31:06.914771 2492 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:31:06.915267 kubelet[2492]: I0114 01:31:06.915184 2492 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:31:06.915698 kubelet[2492]: E0114 01:31:06.914796 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Jan 14 01:31:06.917354 kubelet[2492]: E0114 01:31:06.917251 2492 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:31:06.920078 kubelet[2492]: I0114 01:31:06.919864 2492 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:31:06.927000 audit[2510]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.927000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff32cec6f0 a2=0 a3=0 items=0 ppid=2492 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:31:06.932000 audit[2511]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.932000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb69a8690 a2=0 a3=0 items=0 ppid=2492 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:31:06.944000 audit[2514]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.944000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff807f1ab0 a2=0 a3=0 items=0 ppid=2492 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.944000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:31:06.955000 audit[2518]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.955000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc06509290 a2=0 a3=0 items=0 ppid=2492 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:31:06.962423 kubelet[2492]: I0114 01:31:06.962228 2492 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:31:06.962423 kubelet[2492]: I0114 01:31:06.962248 2492 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:31:06.962545 kubelet[2492]: I0114 01:31:06.962533 2492 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:31:06.968748 kubelet[2492]: I0114 01:31:06.968596 2492 policy_none.go:49] "None policy: Start" Jan 14 01:31:06.968748 kubelet[2492]: I0114 01:31:06.968973 2492 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:31:06.969285 kubelet[2492]: I0114 01:31:06.969196 2492 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:31:06.976000 audit[2521]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.976000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcf74dccc0 a2=0 a3=0 items=0 ppid=2492 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:31:06.979080 kubelet[2492]: I0114 01:31:06.978505 2492 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:31:06.981000 audit[2523]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:06.981000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdcb47f540 a2=0 a3=0 items=0 ppid=2492 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:31:06.983000 audit[2524]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.983000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc40a8530 a2=0 a3=0 items=0 ppid=2492 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:31:06.992115 kubelet[2492]: I0114 
01:31:06.984752 2492 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:31:06.992115 kubelet[2492]: I0114 01:31:06.985081 2492 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:31:06.992115 kubelet[2492]: I0114 01:31:06.985192 2492 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:31:06.992115 kubelet[2492]: I0114 01:31:06.985241 2492 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:31:06.992115 kubelet[2492]: E0114 01:31:06.985286 2492 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:31:06.992115 kubelet[2492]: E0114 01:31:06.986701 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:31:06.991653 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 01:31:06.988000 audit[2526]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.988000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd521ec860 a2=0 a3=0 items=0 ppid=2492 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:31:06.989000 audit[2525]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:06.989000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb1c41fb0 a2=0 a3=0 items=0 ppid=2492 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:31:06.996000 audit[2527]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:06.996000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdddd2bf70 a2=0 a3=0 items=0 ppid=2492 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.996000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:31:06.999000 audit[2528]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:06.999000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd108fa150 a2=0 a3=0 items=0 ppid=2492 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:06.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:31:07.004000 audit[2529]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:07.004000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed9603010 a2=0 a3=0 items=0 ppid=2492 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:31:07.010745 kubelet[2492]: E0114 01:31:07.010632 2492 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:07.013560 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:31:07.031243 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 14 01:31:07.034555 kubelet[2492]: E0114 01:31:07.034419 2492 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:31:07.035438 kubelet[2492]: I0114 01:31:07.035120 2492 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:31:07.035438 kubelet[2492]: I0114 01:31:07.035312 2492 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:31:07.036764 kubelet[2492]: I0114 01:31:07.036543 2492 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:31:07.038446 kubelet[2492]: E0114 01:31:07.038099 2492 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:31:07.038446 kubelet[2492]: E0114 01:31:07.038427 2492 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 01:31:07.113301 kubelet[2492]: I0114 01:31:07.113145 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:07.113301 kubelet[2492]: I0114 01:31:07.113244 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:07.113301 kubelet[2492]: I0114 01:31:07.113268 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:07.113301 kubelet[2492]: I0114 01:31:07.113288 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:07.113301 kubelet[2492]: I0114 01:31:07.113309 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:07.113523 kubelet[2492]: I0114 01:31:07.113330 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:31:07.113523 kubelet[2492]: I0114 01:31:07.113353 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:07.113523 kubelet[2492]: I0114 01:31:07.113378 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:07.113523 kubelet[2492]: I0114 01:31:07.113472 2492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:07.117387 systemd[1]: Created slice kubepods-burstable-pod57cdb531a45a580488b93d4f6ef0a992.slice - libcontainer container kubepods-burstable-pod57cdb531a45a580488b93d4f6ef0a992.slice. Jan 14 01:31:07.118100 kubelet[2492]: E0114 01:31:07.117441 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Jan 14 01:31:07.137123 kubelet[2492]: E0114 01:31:07.136757 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:07.138399 kubelet[2492]: I0114 01:31:07.138250 2492 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:31:07.140448 kubelet[2492]: E0114 01:31:07.140353 2492 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 14 01:31:07.141774 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Jan 14 01:31:07.148057 kubelet[2492]: E0114 01:31:07.147759 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:07.152586 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 14 01:31:07.158662 kubelet[2492]: E0114 01:31:07.158527 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:07.343662 kubelet[2492]: I0114 01:31:07.343559 2492 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:31:07.344242 kubelet[2492]: E0114 01:31:07.344045 2492 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 14 01:31:07.439456 kubelet[2492]: E0114 01:31:07.439352 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.441730 containerd[1601]: time="2026-01-14T01:31:07.441436926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57cdb531a45a580488b93d4f6ef0a992,Namespace:kube-system,Attempt:0,}" Jan 14 01:31:07.450347 kubelet[2492]: E0114 01:31:07.450165 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.451420 containerd[1601]: time="2026-01-14T01:31:07.451135621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 14 01:31:07.460004 
kubelet[2492]: E0114 01:31:07.459640 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.460959 containerd[1601]: time="2026-01-14T01:31:07.460548895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 14 01:31:07.516027 containerd[1601]: time="2026-01-14T01:31:07.515037124Z" level=info msg="connecting to shim 16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c" address="unix:///run/containerd/s/335c0395e8dc660f75df5d66cdc0b90e3dafa88db8250c285c9a776991820b69" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:07.519478 kubelet[2492]: E0114 01:31:07.519382 2492 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Jan 14 01:31:07.558466 containerd[1601]: time="2026-01-14T01:31:07.558265403Z" level=info msg="connecting to shim d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7" address="unix:///run/containerd/s/c03d001598ba6adf68b9f3b7b864072e755d971460c19c3edcd7fca9095eaf18" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:07.562545 containerd[1601]: time="2026-01-14T01:31:07.562254300Z" level=info msg="connecting to shim d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504" address="unix:///run/containerd/s/d542b56a52832c3b702652235264712354feae3b0e043f71720953e426a47913" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:07.629531 systemd[1]: Started cri-containerd-d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7.scope - libcontainer container d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7. 
Jan 14 01:31:07.639756 systemd[1]: Started cri-containerd-16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c.scope - libcontainer container 16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c. Jan 14 01:31:07.651281 systemd[1]: Started cri-containerd-d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504.scope - libcontainer container d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504. Jan 14 01:31:07.675000 audit: BPF prog-id=81 op=LOAD Jan 14 01:31:07.677000 audit: BPF prog-id=82 op=LOAD Jan 14 01:31:07.677000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.677000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:31:07.677000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.677000 audit: BPF prog-id=83 op=LOAD Jan 14 01:31:07.677000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c000130488 a2=98 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.677000 audit: BPF prog-id=84 op=LOAD Jan 14 01:31:07.677000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.678000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:31:07.678000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.678000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:31:07.678000 audit[2590]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.678000 audit: BPF prog-id=85 op=LOAD Jan 14 01:31:07.678000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2556 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430376462663263626365366263383933396364623632373865353937 Jan 14 01:31:07.683000 audit: BPF prog-id=86 op=LOAD Jan 14 01:31:07.683000 audit: BPF prog-id=87 op=LOAD Jan 14 01:31:07.683000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.683000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 
01:31:07.684000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:31:07.684000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.684000 audit: BPF prog-id=88 op=LOAD Jan 14 01:31:07.684000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.684000 audit: BPF prog-id=89 op=LOAD Jan 14 01:31:07.684000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.684000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.684000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:31:07.684000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.684000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:31:07.684000 audit[2582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.685000 audit: BPF prog-id=90 op=LOAD Jan 14 01:31:07.685000 audit[2582]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2539 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:07.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136626433613861373764363765373562656361326665616131363862 Jan 14 01:31:07.689000 audit: BPF prog-id=91 op=LOAD Jan 14 01:31:07.690000 audit: BPF prog-id=92 op=LOAD Jan 14 01:31:07.690000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.691000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:31:07.691000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.692000 audit: BPF prog-id=93 op=LOAD Jan 14 01:31:07.692000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.694000 audit: BPF prog-id=94 op=LOAD Jan 14 01:31:07.694000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.694000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:31:07.694000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.694000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:31:07.694000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.694000 audit: BPF prog-id=95 op=LOAD Jan 14 01:31:07.694000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2575 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.694000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438346638623935393536643232666264346162363465383766666663 Jan 14 01:31:07.754186 kubelet[2492]: I0114 01:31:07.754096 2492 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:31:07.754550 kubelet[2492]: E0114 01:31:07.754490 2492 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Jan 14 01:31:07.782866 containerd[1601]: time="2026-01-14T01:31:07.782762982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7\"" Jan 14 01:31:07.783537 containerd[1601]: time="2026-01-14T01:31:07.783476543Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57cdb531a45a580488b93d4f6ef0a992,Namespace:kube-system,Attempt:0,} returns sandbox id \"16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c\"" Jan 14 01:31:07.785714 kubelet[2492]: E0114 01:31:07.785657 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.788385 kubelet[2492]: E0114 01:31:07.788332 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.806550 containerd[1601]: time="2026-01-14T01:31:07.806353572Z" level=info msg="CreateContainer within sandbox \"16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:31:07.811736 containerd[1601]: time="2026-01-14T01:31:07.811601247Z" level=info msg="CreateContainer within sandbox \"d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:31:07.814416 containerd[1601]: time="2026-01-14T01:31:07.814244382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504\"" Jan 14 01:31:07.816102 kubelet[2492]: E0114 01:31:07.815674 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:07.826096 containerd[1601]: time="2026-01-14T01:31:07.825699257Z" level=info msg="CreateContainer within sandbox \"d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:31:07.834103 containerd[1601]: time="2026-01-14T01:31:07.834064574Z" level=info msg="Container 5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:07.843731 containerd[1601]: time="2026-01-14T01:31:07.843632001Z" level=info msg="Container 6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:07.856721 containerd[1601]: time="2026-01-14T01:31:07.856571745Z" level=info msg="CreateContainer within sandbox \"16bd3a8a77d67e75beca2feaa168b7e0eb2ffec728917f0708bac6bdac88bb8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc\"" Jan 14 01:31:07.858983 containerd[1601]: time="2026-01-14T01:31:07.858700513Z" level=info msg="StartContainer for \"5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc\"" Jan 14 01:31:07.861511 containerd[1601]: time="2026-01-14T01:31:07.861468885Z" level=info msg="connecting to shim 5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc" address="unix:///run/containerd/s/335c0395e8dc660f75df5d66cdc0b90e3dafa88db8250c285c9a776991820b69" protocol=ttrpc version=3 Jan 14 01:31:07.867663 containerd[1601]: time="2026-01-14T01:31:07.867629673Z" level=info msg="Container 64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:07.871747 containerd[1601]: time="2026-01-14T01:31:07.871523348Z" level=info msg="CreateContainer within sandbox \"d07dbf2cbce6bc8939cdb6278e59760160c2129ef2b4dd10fa0109572f403fd7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f\"" Jan 14 01:31:07.873246 containerd[1601]: time="2026-01-14T01:31:07.873124846Z" level=info msg="StartContainer for 
\"6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f\"" Jan 14 01:31:07.875373 containerd[1601]: time="2026-01-14T01:31:07.875306135Z" level=info msg="connecting to shim 6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f" address="unix:///run/containerd/s/c03d001598ba6adf68b9f3b7b864072e755d971460c19c3edcd7fca9095eaf18" protocol=ttrpc version=3 Jan 14 01:31:07.881767 containerd[1601]: time="2026-01-14T01:31:07.881592304Z" level=info msg="CreateContainer within sandbox \"d84f8b95956d22fbd4ab64e87fffc4dc508213bf8cea667ec4befcce5a541504\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720\"" Jan 14 01:31:07.885358 containerd[1601]: time="2026-01-14T01:31:07.885328411Z" level=info msg="StartContainer for \"64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720\"" Jan 14 01:31:07.888179 containerd[1601]: time="2026-01-14T01:31:07.888147527Z" level=info msg="connecting to shim 64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720" address="unix:///run/containerd/s/d542b56a52832c3b702652235264712354feae3b0e043f71720953e426a47913" protocol=ttrpc version=3 Jan 14 01:31:07.907249 systemd[1]: Started cri-containerd-5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc.scope - libcontainer container 5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc. Jan 14 01:31:07.921496 systemd[1]: Started cri-containerd-6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f.scope - libcontainer container 6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f. 
Jan 14 01:31:07.943000 audit: BPF prog-id=96 op=LOAD Jan 14 01:31:07.946000 audit: BPF prog-id=97 op=LOAD Jan 14 01:31:07.946000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.946000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:31:07.946000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.947000 audit: BPF prog-id=98 op=LOAD Jan 14 01:31:07.947000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.947000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.949000 audit: BPF prog-id=99 op=LOAD Jan 14 01:31:07.949000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.949000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:31:07.949000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.949000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:31:07.949000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:07.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.949000 audit: BPF prog-id=100 op=LOAD Jan 14 01:31:07.949000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2539 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565626231663937623435383263366239383265626331373365383364 Jan 14 01:31:07.958000 audit: BPF prog-id=101 op=LOAD Jan 14 01:31:07.960000 audit: BPF prog-id=102 op=LOAD Jan 14 01:31:07.960000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.960000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:31:07.960000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.961000 audit: BPF prog-id=103 op=LOAD Jan 14 01:31:07.961000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.961000 audit: BPF prog-id=104 op=LOAD Jan 14 01:31:07.961000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.961000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:31:07.961000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.961000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:31:07.961000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.961000 audit: BPF prog-id=105 op=LOAD Jan 14 01:31:07.961000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2556 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:07.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643735393739656665313035356361613164363130353032633531 Jan 14 01:31:07.988218 kubelet[2492]: E0114 01:31:07.988172 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:31:07.990531 systemd[1]: Started cri-containerd-64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720.scope - libcontainer container 64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720. Jan 14 01:31:08.073394 containerd[1601]: time="2026-01-14T01:31:08.073280547Z" level=info msg="StartContainer for \"5ebb1f97b4582c6b982ebc173e83d37243f3976e7b3c526895e2e61938e10cbc\" returns successfully" Jan 14 01:31:08.077000 audit: BPF prog-id=106 op=LOAD Jan 14 01:31:08.079000 audit: BPF prog-id=107 op=LOAD Jan 14 01:31:08.079000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.079000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:31:08.079000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.079000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.080000 audit: BPF prog-id=108 op=LOAD Jan 14 01:31:08.080000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.082000 audit: BPF prog-id=109 op=LOAD Jan 14 01:31:08.082000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.082000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:31:08.082000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:31:08.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.082000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:31:08.082000 audit[2690]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.082000 audit: BPF prog-id=110 op=LOAD Jan 14 01:31:08.082000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2575 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:08.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393937656261303836656266623136373935343831633433383437 Jan 14 01:31:08.102018 containerd[1601]: time="2026-01-14T01:31:08.101253521Z" level=info msg="StartContainer for \"6bd75979efe1055caa1d610502c51e5102a4d3adb94f2d19067819526165384f\" returns successfully" Jan 14 01:31:08.172241 kubelet[2492]: E0114 01:31:08.169033 2492 reflector.go:200] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:31:08.207017 containerd[1601]: time="2026-01-14T01:31:08.206206760Z" level=info msg="StartContainer for \"64997eba086ebfb16795481c438472f06909695672dfdd7fa725b03024a54720\" returns successfully" Jan 14 01:31:08.220638 kubelet[2492]: E0114 01:31:08.220541 2492 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:31:08.561800 kubelet[2492]: I0114 01:31:08.561340 2492 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:31:09.044054 kubelet[2492]: E0114 01:31:09.043315 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:09.044054 kubelet[2492]: E0114 01:31:09.043535 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:09.044431 kubelet[2492]: E0114 01:31:09.044328 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:09.044612 kubelet[2492]: E0114 01:31:09.044511 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:09.052565 kubelet[2492]: E0114 01:31:09.052466 2492 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:09.052736 kubelet[2492]: E0114 01:31:09.052652 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:10.059020 kubelet[2492]: E0114 01:31:10.057462 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:10.059020 kubelet[2492]: E0114 01:31:10.057618 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:10.059020 kubelet[2492]: E0114 01:31:10.058159 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:10.059020 kubelet[2492]: E0114 01:31:10.058259 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:10.061783 kubelet[2492]: E0114 01:31:10.061683 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:10.065427 kubelet[2492]: E0114 01:31:10.065340 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:10.691558 kubelet[2492]: E0114 01:31:10.691427 2492 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 01:31:10.796387 kubelet[2492]: I0114 01:31:10.796255 2492 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:31:10.796387 kubelet[2492]: E0114 01:31:10.796370 2492 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 14 01:31:10.845393 kubelet[2492]: E0114 01:31:10.845060 2492 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:10.945617 kubelet[2492]: E0114 01:31:10.945412 2492 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:11.046142 kubelet[2492]: E0114 01:31:11.045989 2492 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:11.076106 kubelet[2492]: E0114 01:31:11.057552 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:11.076106 kubelet[2492]: E0114 01:31:11.058133 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:11.076106 kubelet[2492]: E0114 01:31:11.058196 2492 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:31:11.076106 kubelet[2492]: E0114 01:31:11.058342 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:11.114457 kubelet[2492]: I0114 01:31:11.114400 2492 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:11.131010 kubelet[2492]: E0114 01:31:11.130059 2492 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:11.131010 kubelet[2492]: I0114 01:31:11.130085 2492 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:11.132783 kubelet[2492]: E0114 01:31:11.132696 2492 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:11.132783 kubelet[2492]: I0114 01:31:11.132720 2492 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:31:11.134682 kubelet[2492]: E0114 01:31:11.134662 2492 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 01:31:11.848398 kubelet[2492]: I0114 01:31:11.848038 2492 apiserver.go:52] "Watching apiserver" Jan 14 01:31:11.912440 kubelet[2492]: I0114 01:31:11.912285 2492 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:31:12.056493 kubelet[2492]: I0114 01:31:12.056387 2492 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:12.083048 kubelet[2492]: E0114 01:31:12.082738 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:13.061502 kubelet[2492]: E0114 01:31:13.061268 2492 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:14.184557 systemd[1]: Reload requested from client 
PID 2778 ('systemctl') (unit session-10.scope)... Jan 14 01:31:14.184637 systemd[1]: Reloading... Jan 14 01:31:14.441094 zram_generator::config[2824]: No configuration found. Jan 14 01:31:14.834675 systemd[1]: Reloading finished in 649 ms. Jan 14 01:31:14.912684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:31:14.925210 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:31:14.926118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:31:14.926386 systemd[1]: kubelet.service: Consumed 2.053s CPU time, 129.1M memory peak. Jan 14 01:31:14.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:14.951575 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:31:14.951650 kernel: audit: type=1131 audit(1768354274.925:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:14.932308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:31:14.932000 audit: BPF prog-id=111 op=LOAD Jan 14 01:31:14.932000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:31:14.962142 kernel: audit: type=1334 audit(1768354274.932:382): prog-id=111 op=LOAD Jan 14 01:31:14.962191 kernel: audit: type=1334 audit(1768354274.932:383): prog-id=65 op=UNLOAD Jan 14 01:31:14.962309 kernel: audit: type=1334 audit(1768354274.932:384): prog-id=112 op=LOAD Jan 14 01:31:14.932000 audit: BPF prog-id=112 op=LOAD Jan 14 01:31:14.932000 audit: BPF prog-id=113 op=LOAD Jan 14 01:31:14.972142 kernel: audit: type=1334 audit(1768354274.932:385): prog-id=113 op=LOAD Jan 14 01:31:14.972273 kernel: audit: type=1334 audit(1768354274.932:386): prog-id=66 op=UNLOAD Jan 14 01:31:14.932000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:31:14.977295 kernel: audit: type=1334 audit(1768354274.932:387): prog-id=67 op=UNLOAD Jan 14 01:31:14.932000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:31:14.982154 kernel: audit: type=1334 audit(1768354274.935:388): prog-id=114 op=LOAD Jan 14 01:31:14.935000 audit: BPF prog-id=114 op=LOAD Jan 14 01:31:14.935000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:31:14.992644 kernel: audit: type=1334 audit(1768354274.935:389): prog-id=64 op=UNLOAD Jan 14 01:31:14.992699 kernel: audit: type=1334 audit(1768354274.937:390): prog-id=115 op=LOAD Jan 14 01:31:14.937000 audit: BPF prog-id=115 op=LOAD Jan 14 01:31:14.937000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:31:14.939000 audit: BPF prog-id=116 op=LOAD Jan 14 01:31:14.939000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:31:14.940000 audit: BPF prog-id=117 op=LOAD Jan 14 01:31:14.940000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:31:14.940000 audit: BPF prog-id=118 op=LOAD Jan 14 01:31:14.940000 audit: BPF prog-id=119 op=LOAD Jan 14 01:31:14.940000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:31:14.940000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:31:14.942000 audit: BPF prog-id=120 op=LOAD Jan 14 01:31:14.942000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:31:14.942000 audit: BPF prog-id=121 
op=LOAD Jan 14 01:31:14.942000 audit: BPF prog-id=122 op=LOAD Jan 14 01:31:14.942000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:31:14.942000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:31:14.944000 audit: BPF prog-id=123 op=LOAD Jan 14 01:31:14.944000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:31:14.944000 audit: BPF prog-id=124 op=LOAD Jan 14 01:31:14.944000 audit: BPF prog-id=125 op=LOAD Jan 14 01:31:14.944000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:31:14.944000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:31:14.948000 audit: BPF prog-id=126 op=LOAD Jan 14 01:31:14.948000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:31:14.948000 audit: BPF prog-id=127 op=LOAD Jan 14 01:31:14.948000 audit: BPF prog-id=128 op=LOAD Jan 14 01:31:14.948000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:31:14.948000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:31:14.948000 audit: BPF prog-id=129 op=LOAD Jan 14 01:31:14.948000 audit: BPF prog-id=130 op=LOAD Jan 14 01:31:14.948000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:31:14.948000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:31:15.304724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:31:15.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:15.323522 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:31:15.694524 kubelet[2869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:31:15.694524 kubelet[2869]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 01:31:15.694524 kubelet[2869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:31:15.694524 kubelet[2869]: I0114 01:31:15.694453 2869 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:31:15.888218 kubelet[2869]: I0114 01:31:15.887101 2869 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:31:15.888218 kubelet[2869]: I0114 01:31:15.887139 2869 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:31:15.908434 kubelet[2869]: I0114 01:31:15.907683 2869 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:31:15.966144 kubelet[2869]: I0114 01:31:15.945133 2869 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:31:16.028660 kubelet[2869]: I0114 01:31:16.027360 2869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:31:16.374102 kubelet[2869]: I0114 01:31:16.361548 2869 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:31:16.523373 kubelet[2869]: I0114 01:31:16.522383 2869 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:31:16.541774 kubelet[2869]: I0114 01:31:16.524416 2869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:31:16.543716 kubelet[2869]: I0114 01:31:16.528360 2869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:31:16.545715 kubelet[2869]: I0114 01:31:16.545315 2869 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:31:16.545715 
kubelet[2869]: I0114 01:31:16.545700 2869 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:31:16.548047 kubelet[2869]: I0114 01:31:16.547798 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:31:16.565075 kubelet[2869]: I0114 01:31:16.564668 2869 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:31:16.565075 kubelet[2869]: I0114 01:31:16.565164 2869 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:31:16.566435 kubelet[2869]: I0114 01:31:16.565837 2869 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:31:16.566435 kubelet[2869]: I0114 01:31:16.566085 2869 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:31:16.609395 kubelet[2869]: I0114 01:31:16.608667 2869 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:31:16.628369 kubelet[2869]: I0114 01:31:16.624273 2869 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:31:16.682346 kubelet[2869]: I0114 01:31:16.682266 2869 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:31:16.682622 kubelet[2869]: I0114 01:31:16.682484 2869 server.go:1289] "Started kubelet" Jan 14 01:31:16.682842 kubelet[2869]: I0114 01:31:16.682805 2869 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:31:16.690767 kubelet[2869]: I0114 01:31:16.690630 2869 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:31:16.692588 kubelet[2869]: I0114 01:31:16.692226 2869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:31:16.693783 kubelet[2869]: I0114 01:31:16.693615 2869 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:31:16.829625 
kubelet[2869]: I0114 01:31:16.829388 2869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:31:16.843177 kubelet[2869]: I0114 01:31:16.840663 2869 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:31:16.897829 kubelet[2869]: E0114 01:31:16.896317 2869 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:31:16.897829 kubelet[2869]: I0114 01:31:16.896439 2869 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:31:16.897829 kubelet[2869]: I0114 01:31:16.897287 2869 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:31:16.897829 kubelet[2869]: I0114 01:31:16.897682 2869 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:31:16.898571 kubelet[2869]: E0114 01:31:16.898507 2869 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:31:16.973748 kubelet[2869]: I0114 01:31:16.973650 2869 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:31:16.976689 kubelet[2869]: I0114 01:31:16.976443 2869 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:31:16.985812 kubelet[2869]: I0114 01:31:16.985583 2869 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:31:17.079083 kubelet[2869]: I0114 01:31:17.078140 2869 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:31:17.094544 kubelet[2869]: I0114 01:31:17.094291 2869 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:31:17.094544 kubelet[2869]: I0114 01:31:17.094451 2869 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:31:17.094544 kubelet[2869]: I0114 01:31:17.094552 2869 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:31:17.094544 kubelet[2869]: I0114 01:31:17.094605 2869 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:31:17.095178 kubelet[2869]: E0114 01:31:17.094707 2869 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:31:17.210363 kubelet[2869]: E0114 01:31:17.208360 2869 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:31:17.416338 kubelet[2869]: E0114 01:31:17.416194 2869 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:31:17.596138 kubelet[2869]: I0114 01:31:17.594409 2869 apiserver.go:52] "Watching apiserver" Jan 14 01:31:17.675095 kubelet[2869]: I0114 01:31:17.673639 2869 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:31:17.675095 kubelet[2869]: I0114 01:31:17.673844 2869 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:31:17.675095 kubelet[2869]: I0114 01:31:17.674742 2869 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:31:17.676326 kubelet[2869]: I0114 01:31:17.675760 2869 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:31:17.676326 kubelet[2869]: I0114 01:31:17.675849 2869 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:31:17.676326 kubelet[2869]: I0114 01:31:17.676132 2869 policy_none.go:49] "None policy: Start" Jan 14 01:31:17.676326 kubelet[2869]: I0114 01:31:17.676266 2869 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 
01:31:17.676326 kubelet[2869]: I0114 01:31:17.676288 2869 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:31:17.676695 kubelet[2869]: I0114 01:31:17.676564 2869 state_mem.go:75] "Updated machine memory state" Jan 14 01:31:17.820381 kubelet[2869]: E0114 01:31:17.818360 2869 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:31:17.869713 kubelet[2869]: E0114 01:31:17.866621 2869 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:31:17.869713 kubelet[2869]: I0114 01:31:17.868127 2869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:31:17.869713 kubelet[2869]: I0114 01:31:17.868181 2869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:31:17.895347 kubelet[2869]: I0114 01:31:17.871483 2869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:31:18.175036 kubelet[2869]: E0114 01:31:18.171248 2869 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:31:18.782213 kubelet[2869]: I0114 01:31:18.779801 2869 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:18.799813 kubelet[2869]: I0114 01:31:18.729144 2869 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:31:18.810521 kubelet[2869]: I0114 01:31:18.810093 2869 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:31:18.811320 kubelet[2869]: I0114 01:31:18.811149 2869 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:31:18.813803 kubelet[2869]: I0114 01:31:18.813152 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:18.813803 kubelet[2869]: I0114 01:31:18.813219 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:18.813803 kubelet[2869]: I0114 01:31:18.813259 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:18.813803 kubelet[2869]: I0114 01:31:18.813293 2869 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:18.813803 kubelet[2869]: I0114 01:31:18.813345 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:18.814575 kubelet[2869]: I0114 01:31:18.813367 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:18.814575 kubelet[2869]: I0114 01:31:18.813388 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57cdb531a45a580488b93d4f6ef0a992-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57cdb531a45a580488b93d4f6ef0a992\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:31:18.814575 kubelet[2869]: I0114 01:31:18.813455 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:31:18.814575 kubelet[2869]: I0114 01:31:18.813482 2869 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:31:19.098423 kubelet[2869]: E0114 01:31:19.085800 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:19.275464 kubelet[2869]: E0114 01:31:19.112593 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:19.275464 kubelet[2869]: E0114 01:31:19.113624 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:19.286870 kubelet[2869]: I0114 01:31:19.286613 2869 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 01:31:19.287641 kubelet[2869]: I0114 01:31:19.287490 2869 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:31:20.102046 kubelet[2869]: E0114 01:31:20.101596 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:20.110192 kubelet[2869]: E0114 01:31:20.108447 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:20.115546 kubelet[2869]: E0114 01:31:20.115502 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 14 01:31:20.147120 kubelet[2869]: I0114 01:31:20.146485 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=8.146259236 podStartE2EDuration="8.146259236s" podCreationTimestamp="2026-01-14 01:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:31:20.080355372 +0000 UTC m=+4.729626963" watchObservedRunningTime="2026-01-14 01:31:20.146259236 +0000 UTC m=+4.795530817" Jan 14 01:31:20.147451 kubelet[2869]: I0114 01:31:20.147362 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.147355712 podStartE2EDuration="2.147355712s" podCreationTimestamp="2026-01-14 01:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:31:20.14720433 +0000 UTC m=+4.796475911" watchObservedRunningTime="2026-01-14 01:31:20.147355712 +0000 UTC m=+4.796627292" Jan 14 01:31:20.379806 kubelet[2869]: I0114 01:31:20.371699 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.371567491 podStartE2EDuration="2.371567491s" podCreationTimestamp="2026-01-14 01:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:31:20.240581418 +0000 UTC m=+4.889852999" watchObservedRunningTime="2026-01-14 01:31:20.371567491 +0000 UTC m=+5.020839082" Jan 14 01:31:20.831411 kubelet[2869]: I0114 01:31:20.830619 2869 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:31:20.837188 containerd[1601]: time="2026-01-14T01:31:20.836851390Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Jan 14 01:31:20.970170 kubelet[2869]: I0114 01:31:20.969510 2869 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:31:21.192393 kubelet[2869]: E0114 01:31:21.188337 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:21.236553 kubelet[2869]: E0114 01:31:21.236231 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:21.682075 kubelet[2869]: I0114 01:31:21.681262 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77877b54-4e6d-4431-9d9b-2dc5835fdd20-kube-proxy\") pod \"kube-proxy-5h7sl\" (UID: \"77877b54-4e6d-4431-9d9b-2dc5835fdd20\") " pod="kube-system/kube-proxy-5h7sl" Jan 14 01:31:21.693446 kubelet[2869]: I0114 01:31:21.693133 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77877b54-4e6d-4431-9d9b-2dc5835fdd20-xtables-lock\") pod \"kube-proxy-5h7sl\" (UID: \"77877b54-4e6d-4431-9d9b-2dc5835fdd20\") " pod="kube-system/kube-proxy-5h7sl" Jan 14 01:31:21.694802 kubelet[2869]: I0114 01:31:21.693874 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77877b54-4e6d-4431-9d9b-2dc5835fdd20-lib-modules\") pod \"kube-proxy-5h7sl\" (UID: \"77877b54-4e6d-4431-9d9b-2dc5835fdd20\") " pod="kube-system/kube-proxy-5h7sl" Jan 14 01:31:21.721112 kubelet[2869]: I0114 01:31:21.715715 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9t2s2\" (UniqueName: \"kubernetes.io/projected/77877b54-4e6d-4431-9d9b-2dc5835fdd20-kube-api-access-9t2s2\") pod \"kube-proxy-5h7sl\" (UID: \"77877b54-4e6d-4431-9d9b-2dc5835fdd20\") " pod="kube-system/kube-proxy-5h7sl" Jan 14 01:31:21.728820 systemd[1]: Created slice kubepods-besteffort-pod77877b54_4e6d_4431_9d9b_2dc5835fdd20.slice - libcontainer container kubepods-besteffort-pod77877b54_4e6d_4431_9d9b_2dc5835fdd20.slice. Jan 14 01:31:22.145087 kubelet[2869]: E0114 01:31:22.143344 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:22.149583 containerd[1601]: time="2026-01-14T01:31:22.149336409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5h7sl,Uid:77877b54-4e6d-4431-9d9b-2dc5835fdd20,Namespace:kube-system,Attempt:0,}" Jan 14 01:31:22.217478 kubelet[2869]: E0114 01:31:22.215351 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:22.375605 kubelet[2869]: E0114 01:31:22.375410 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:22.498191 containerd[1601]: time="2026-01-14T01:31:22.486524818Z" level=info msg="connecting to shim 566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15" address="unix:///run/containerd/s/cd6ffdad0321aab4c64156264613b20dd190537aa2c2b311ec105d053e8775d6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:22.891128 systemd[1]: Started cri-containerd-566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15.scope - libcontainer container 566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15. 
Jan 14 01:31:23.335681 kubelet[2869]: E0114 01:31:23.332881 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:23.414361 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:31:23.472172 kernel: audit: type=1334 audit(1768354283.398:423): prog-id=131 op=LOAD Jan 14 01:31:23.472579 kernel: audit: type=1334 audit(1768354283.416:424): prog-id=132 op=LOAD Jan 14 01:31:23.472665 kernel: audit: type=1300 audit(1768354283.416:424): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.398000 audit: BPF prog-id=131 op=LOAD Jan 14 01:31:23.416000 audit: BPF prog-id=132 op=LOAD Jan 14 01:31:23.416000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.516310 kernel: audit: type=1327 audit(1768354283.416:424): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.524673 kubelet[2869]: E0114 01:31:23.524392 
2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:23.574105 kernel: audit: type=1334 audit(1768354283.416:425): prog-id=132 op=UNLOAD Jan 14 01:31:23.574210 kernel: audit: type=1300 audit(1768354283.416:425): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.416000 audit: BPF prog-id=132 op=UNLOAD Jan 14 01:31:23.416000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.553808 systemd[1]: Created slice kubepods-besteffort-podfd4d50d0_9bdd_4479_af72_dc5a51ac101c.slice - libcontainer container kubepods-besteffort-podfd4d50d0_9bdd_4479_af72_dc5a51ac101c.slice. 
Jan 14 01:31:23.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.609746 kernel: audit: type=1327 audit(1768354283.416:425): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.617253 kubelet[2869]: I0114 01:31:23.584850 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd4d50d0-9bdd-4479-af72-dc5a51ac101c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-c7hn8\" (UID: \"fd4d50d0-9bdd-4479-af72-dc5a51ac101c\") " pod="tigera-operator/tigera-operator-7dcd859c48-c7hn8" Jan 14 01:31:23.617253 kubelet[2869]: I0114 01:31:23.589652 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tjf\" (UniqueName: \"kubernetes.io/projected/fd4d50d0-9bdd-4479-af72-dc5a51ac101c-kube-api-access-p5tjf\") pod \"tigera-operator-7dcd859c48-c7hn8\" (UID: \"fd4d50d0-9bdd-4479-af72-dc5a51ac101c\") " pod="tigera-operator/tigera-operator-7dcd859c48-c7hn8" Jan 14 01:31:23.628783 kernel: audit: type=1334 audit(1768354283.417:426): prog-id=133 op=LOAD Jan 14 01:31:23.417000 audit: BPF prog-id=133 op=LOAD Jan 14 01:31:23.678301 kernel: audit: type=1300 audit(1768354283.417:426): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:31:23.417000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.708760 kernel: audit: type=1327 audit(1768354283.417:426): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.417000 audit: BPF prog-id=134 op=LOAD Jan 14 01:31:23.417000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.417000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:31:23.417000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.417000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:31:23.417000 audit[2941]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:23.417000 audit: BPF prog-id=135 op=LOAD Jan 14 01:31:23.417000 audit[2941]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2930 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:23.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536366632366230636536316138383833656161393930643862386461 Jan 14 01:31:24.243849 containerd[1601]: time="2026-01-14T01:31:24.243623156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5h7sl,Uid:77877b54-4e6d-4431-9d9b-2dc5835fdd20,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15\"" Jan 14 01:31:24.321181 kubelet[2869]: E0114 01:31:24.320557 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:24.390430 containerd[1601]: time="2026-01-14T01:31:24.390178352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c7hn8,Uid:fd4d50d0-9bdd-4479-af72-dc5a51ac101c,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:31:24.406699 kubelet[2869]: E0114 01:31:24.390294 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:24.600613 containerd[1601]: time="2026-01-14T01:31:24.595067364Z" level=info msg="CreateContainer within sandbox \"566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:31:24.854861 containerd[1601]: time="2026-01-14T01:31:24.853565269Z" level=info msg="connecting to shim 958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540" address="unix:///run/containerd/s/4deb5015e11a60d63c5791a5a828463eca15f90051415e88535f2d6d8df95448" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:24.856391 containerd[1601]: time="2026-01-14T01:31:24.856337862Z" level=info msg="Container 3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:24.856776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774178322.mount: Deactivated successfully. 
Jan 14 01:31:24.962589 containerd[1601]: time="2026-01-14T01:31:24.962299689Z" level=info msg="CreateContainer within sandbox \"566f26b0ce61a8883eaa990d8b8da50222de3bf3c34a57e4318b62f2429d7d15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31\"" Jan 14 01:31:25.054405 containerd[1601]: time="2026-01-14T01:31:25.048544128Z" level=info msg="StartContainer for \"3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31\"" Jan 14 01:31:25.386442 systemd[1]: Started cri-containerd-958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540.scope - libcontainer container 958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540. Jan 14 01:31:25.438405 containerd[1601]: time="2026-01-14T01:31:25.429338904Z" level=info msg="connecting to shim 3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31" address="unix:///run/containerd/s/cd6ffdad0321aab4c64156264613b20dd190537aa2c2b311ec105d053e8775d6" protocol=ttrpc version=3 Jan 14 01:31:25.841877 kubelet[2869]: E0114 01:31:25.838336 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:25.891000 audit: BPF prog-id=136 op=LOAD Jan 14 01:31:25.905000 audit: BPF prog-id=137 op=LOAD Jan 14 01:31:25.905000 audit[2996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214238 a2=98 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.905000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.921000 audit: BPF prog-id=137 op=UNLOAD Jan 14 01:31:25.921000 audit[2996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.926000 audit: BPF prog-id=138 op=LOAD Jan 14 01:31:25.926000 audit[2996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.926000 audit: BPF prog-id=139 op=LOAD Jan 14 01:31:25.926000 audit[2996]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:31:25.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.926000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:31:25.926000 audit[2996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.926000 audit: BPF prog-id=138 op=UNLOAD Jan 14 01:31:25.926000 audit[2996]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.926000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:25.988000 audit: BPF prog-id=140 op=LOAD Jan 14 01:31:25.988000 audit[2996]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002146e8 a2=98 a3=0 items=0 ppid=2983 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:25.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935383034366562373833633730383865316266363839626666663466 Jan 14 01:31:26.428501 systemd[1]: Started cri-containerd-3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31.scope - libcontainer container 3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31. Jan 14 01:31:27.622556 containerd[1601]: time="2026-01-14T01:31:27.621527889Z" level=error msg="get state for 958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540" error="context deadline exceeded" Jan 14 01:31:27.808718 containerd[1601]: time="2026-01-14T01:31:27.696223197Z" level=warning msg="unknown status" status=0 Jan 14 01:31:29.031877 kubelet[2869]: E0114 01:31:28.822556 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.265s" Jan 14 01:31:29.587548 kubelet[2869]: E0114 01:31:29.587370 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:29.620873 containerd[1601]: time="2026-01-14T01:31:29.620432051Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 14 01:31:29.694150 containerd[1601]: time="2026-01-14T01:31:29.694107864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-c7hn8,Uid:fd4d50d0-9bdd-4479-af72-dc5a51ac101c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540\"" Jan 14 01:31:29.695000 audit: BPF prog-id=141 op=LOAD Jan 14 01:31:29.706448 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:31:29.707453 kernel: audit: type=1334 
audit(1768354289.695:439): prog-id=141 op=LOAD Jan 14 01:31:29.707696 containerd[1601]: time="2026-01-14T01:31:29.700599098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:31:29.695000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.727004 kernel: audit: type=1300 audit(1768354289.695:439): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.727196 kernel: audit: type=1327 audit(1768354289.695:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.695000 audit: BPF prog-id=142 op=LOAD Jan 14 01:31:29.695000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.778172 kernel: audit: type=1334 audit(1768354289.695:440): prog-id=142 op=LOAD Jan 14 
01:31:29.778377 kernel: audit: type=1300 audit(1768354289.695:440): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.778504 kernel: audit: type=1327 audit(1768354289.695:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.787017 containerd[1601]: time="2026-01-14T01:31:29.786704689Z" level=info msg="StartContainer for \"3c27f1b8f883cb29dfa36057d411a717212e36c97cb09879cd544c757af84b31\" returns successfully" Jan 14 01:31:29.695000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:31:29.799378 kernel: audit: type=1334 audit(1768354289.695:441): prog-id=142 op=UNLOAD Jan 14 01:31:29.799531 kernel: audit: type=1300 audit(1768354289.695:441): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.695000 audit[3015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:31:29.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.830154 kernel: audit: type=1327 audit(1768354289.695:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.830258 kernel: audit: type=1334 audit(1768354289.695:442): prog-id=141 op=UNLOAD Jan 14 01:31:29.695000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:31:29.695000 audit[3015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:29.696000 audit: BPF prog-id=143 op=LOAD Jan 14 01:31:29.696000 audit[3015]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2930 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:29.696000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363323766316238663838336362323964666133363035376434313161 Jan 14 01:31:30.169000 audit[3090]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:30.170000 audit[3089]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.170000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc653c3430 a2=0 a3=7ffc653c341c items=0 ppid=3030 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:31:30.169000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe70abad00 a2=0 a3=7ffe70abacec items=0 ppid=3030 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:31:30.174000 audit[3092]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:30.174000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe50a77590 a2=0 a3=7ffe50a7757c items=0 ppid=3030 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.174000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:31:30.177000 audit[3094]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:30.177000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4825da40 a2=0 a3=7ffe4825da2c items=0 ppid=3030 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:31:30.183000 audit[3096]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.183000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee36e93a0 a2=0 a3=7ffee36e938c items=0 ppid=3030 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:31:30.188000 audit[3097]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.188000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9b5fe890 a2=0 a3=7fff9b5fe87c items=0 ppid=3030 pid=3097 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:31:30.276000 audit[3098]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.276000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc21048cc0 a2=0 a3=7ffc21048cac items=0 ppid=3030 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:31:30.284000 audit[3100]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.284000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffdc558c60 a2=0 a3=7fffdc558c4c items=0 ppid=3030 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:31:30.295000 audit[3103]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3103 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.295000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff69298a50 a2=0 a3=7fff69298a3c items=0 ppid=3030 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.295000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:31:30.298000 audit[3104]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.298000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff72ee69d0 a2=0 a3=7fff72ee69bc items=0 ppid=3030 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:31:30.306000 audit[3106]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.306000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc580f94c0 a2=0 a3=7ffc580f94ac items=0 ppid=3030 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.306000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:31:30.309000 audit[3107]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.309000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7f27dde0 a2=0 a3=7ffc7f27ddcc items=0 ppid=3030 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:31:30.317000 audit[3109]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.317000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff2d46bff0 a2=0 a3=7fff2d46bfdc items=0 ppid=3030 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:31:30.329000 audit[3112]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.329000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffdb5039d60 a2=0 a3=7ffdb5039d4c items=0 ppid=3030 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.329000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:31:30.333000 audit[3113]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.333000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd43583ec0 a2=0 a3=7ffd43583eac items=0 ppid=3030 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:31:30.341000 audit[3115]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.341000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8275e070 a2=0 a3=7ffd8275e05c items=0 ppid=3030 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.341000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:31:30.345000 audit[3116]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.345000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2a5ee060 a2=0 a3=7fff2a5ee04c items=0 ppid=3030 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:31:30.354000 audit[3118]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.354000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd170283a0 a2=0 a3=7ffd1702838c items=0 ppid=3030 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.354000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:31:30.366000 audit[3121]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.366000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffe4505f330 a2=0 a3=7ffe4505f31c items=0 ppid=3030 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:31:30.379000 audit[3124]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.379000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeeca4ac40 a2=0 a3=7ffeeca4ac2c items=0 ppid=3030 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:31:30.383000 audit[3125]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.383000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc8034000 a2=0 a3=7fffc8033fec items=0 ppid=3030 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.383000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:31:30.391000 audit[3127]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.391000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffcb81b660 a2=0 a3=7fffcb81b64c items=0 ppid=3030 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:31:30.404000 audit[3130]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.404000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffa441e00 a2=0 a3=7ffffa441dec items=0 ppid=3030 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:31:30.407000 audit[3131]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.407000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc0e36330 a2=0 a3=7ffcc0e3631c items=0 ppid=3030 pid=3131 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:31:30.414000 audit[3133]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:31:30.414000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff721edde0 a2=0 a3=7fff721eddcc items=0 ppid=3030 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:30.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:31:30.613622 kubelet[2869]: E0114 01:31:30.613385 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:31.075000 audit[3139]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:31.075000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff561da740 a2=0 a3=7fff561da72c items=0 ppid=3030 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.075000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:31.093000 audit[3139]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:31.093000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff561da740 a2=0 a3=7fff561da72c items=0 ppid=3030 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:31.098000 audit[3144]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.098000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda7132b50 a2=0 a3=7ffda7132b3c items=0 ppid=3030 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:31:31.106000 audit[3146]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.106000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdcf5369e0 a2=0 a3=7ffdcf5369cc items=0 ppid=3030 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:31:31.118000 audit[3149]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.118000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd815cd3a0 a2=0 a3=7ffd815cd38c items=0 ppid=3030 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:31:31.123000 audit[3150]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.123000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce61706a0 a2=0 a3=7ffce617068c items=0 ppid=3030 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:31:31.132000 audit[3152]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3152 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.132000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef3c03ad0 a2=0 a3=7ffef3c03abc items=0 ppid=3030 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:31:31.136000 audit[3153]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.136000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd329e6fc0 a2=0 a3=7ffd329e6fac items=0 ppid=3030 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:31:31.143000 audit[3155]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.143000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe73b175e0 a2=0 a3=7ffe73b175cc items=0 ppid=3030 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.143000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:31:31.152000 audit[3158]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.152000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff3102cc50 a2=0 a3=7fff3102cc3c items=0 ppid=3030 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:31:31.155000 audit[3159]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.155000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2c35d6c0 a2=0 a3=7ffe2c35d6ac items=0 ppid=3030 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:31:31.163000 audit[3161]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.163000 audit[3161]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7fff13472c60 a2=0 a3=7fff13472c4c items=0 ppid=3030 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:31:31.167000 audit[3162]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.167000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffed906610 a2=0 a3=7fffed9065fc items=0 ppid=3030 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.167000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:31:31.175000 audit[3164]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.175000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd90f533c0 a2=0 a3=7ffd90f533ac items=0 ppid=3030 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.175000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:31:31.186000 audit[3167]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.186000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe27bd6d10 a2=0 a3=7ffe27bd6cfc items=0 ppid=3030 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:31:31.199000 audit[3170]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.199000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3a1b97a0 a2=0 a3=7ffe3a1b978c items=0 ppid=3030 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.199000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:31:31.203000 audit[3171]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.203000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffddf4d6ab0 a2=0 a3=7ffddf4d6a9c items=0 ppid=3030 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.203000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:31:31.209000 audit[3173]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.209000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe89f27260 a2=0 a3=7ffe89f2724c items=0 ppid=3030 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.209000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:31:31.219000 audit[3176]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.219000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee7ca6460 a2=0 a3=7ffee7ca644c items=0 ppid=3030 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.219000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:31:31.222000 audit[3177]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.222000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe5810090 a2=0 a3=7fffe581007c items=0 ppid=3030 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:31:31.228000 audit[3179]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.228000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff1ae65520 a2=0 a3=7fff1ae6550c items=0 ppid=3030 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:31:31.230000 audit[3180]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.230000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffda2b7ac0 a2=0 
a3=7fffda2b7aac items=0 ppid=3030 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:31:31.236000 audit[3182]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.236000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd493323d0 a2=0 a3=7ffd493323bc items=0 ppid=3030 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:31:31.245000 audit[3185]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:31:31.245000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcec710480 a2=0 a3=7ffcec71046c items=0 ppid=3030 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:31:31.253000 audit[3187]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:31:31.253000 audit[3187]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffdc6bc4530 a2=0 a3=7ffdc6bc451c items=0 ppid=3030 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.253000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:31.253000 audit[3187]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:31:31.253000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffdc6bc4530 a2=0 a3=7ffdc6bc451c items=0 ppid=3030 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:31.253000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:31.615988 kubelet[2869]: E0114 01:31:31.615830 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:31.816062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684835923.mount: Deactivated successfully. 
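The audit `PROCTITLE` records above hex-encode the full command line of each process, with NUL bytes separating the argv entries. A minimal sketch of decoding one of these strings — the sample below is copied from the pid 3182 record above (the rule that jumps INPUT traffic to KUBE-FIREWALL):

```python
# Decode an audit PROCTITLE hex string back into the original argv.
# In PROCTITLE records, the command line is hex-encoded and the
# arguments are separated by NUL (0x00) bytes.
hex_proctitle = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C"
)
decoded = [arg.decode() for arg in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(decoded)
# → ['ip6tables', '-w', '5', '-W', '100000', '-I', 'INPUT', '-t', 'filter',
#    '-j', 'KUBE-FIREWALL']
```

Applied to the other records in this run, the same decoding shows kube-proxy driving `ip6tables`/`iptables-restore` (via `xtables-nft-multi`) to register the KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-POSTROUTING, and KUBE-FIREWALL chains, which matches the `NETFILTER_CFG` entries paired with each `PROCTITLE` line.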
Jan 14 01:31:34.381324 containerd[1601]: time="2026-01-14T01:31:34.381156954Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:34.382777 containerd[1601]: time="2026-01-14T01:31:34.382689635Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:31:34.385035 containerd[1601]: time="2026-01-14T01:31:34.384749022Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:34.387998 containerd[1601]: time="2026-01-14T01:31:34.387729735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:34.388513 containerd[1601]: time="2026-01-14T01:31:34.388432106Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.687806759s" Jan 14 01:31:34.388513 containerd[1601]: time="2026-01-14T01:31:34.388495073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:31:34.395579 containerd[1601]: time="2026-01-14T01:31:34.395517290Z" level=info msg="CreateContainer within sandbox \"958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:31:34.412101 containerd[1601]: time="2026-01-14T01:31:34.411265719Z" level=info msg="Container 
e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:34.422884 containerd[1601]: time="2026-01-14T01:31:34.422612707Z" level=info msg="CreateContainer within sandbox \"958046eb783c7088e1bf689bfff4f19200e251a5847967090295d96c29e00540\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca\"" Jan 14 01:31:34.424508 containerd[1601]: time="2026-01-14T01:31:34.424377892Z" level=info msg="StartContainer for \"e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca\"" Jan 14 01:31:34.426574 containerd[1601]: time="2026-01-14T01:31:34.426357170Z" level=info msg="connecting to shim e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca" address="unix:///run/containerd/s/4deb5015e11a60d63c5791a5a828463eca15f90051415e88535f2d6d8df95448" protocol=ttrpc version=3 Jan 14 01:31:34.466346 systemd[1]: Started cri-containerd-e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca.scope - libcontainer container e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca. 
Jan 14 01:31:34.493000 audit: BPF prog-id=144 op=LOAD Jan 14 01:31:34.494000 audit: BPF prog-id=145 op=LOAD Jan 14 01:31:34.494000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=145 op=UNLOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=146 op=LOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.495000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=147 op=LOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:34.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.495000 audit: BPF prog-id=148 op=LOAD Jan 14 01:31:34.495000 audit[3196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2983 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:34.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396465656336656132376466343761633839353365616663633631 Jan 14 01:31:34.556295 containerd[1601]: time="2026-01-14T01:31:34.556241732Z" level=info msg="StartContainer for \"e59deec6ea27df47ac8953eafcc61391aef8e4a2323d484fe79d766e0302a3ca\" returns successfully" Jan 14 01:31:34.659704 kubelet[2869]: I0114 01:31:34.658698 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5h7sl" podStartSLOduration=13.658534027 podStartE2EDuration="13.658534027s" podCreationTimestamp="2026-01-14 01:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:31:30.632693603 +0000 UTC m=+15.281965194" watchObservedRunningTime="2026-01-14 01:31:34.658534027 +0000 UTC m=+19.307805598" Jan 14 01:31:34.659704 kubelet[2869]: I0114 01:31:34.658862 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-c7hn8" podStartSLOduration=6.96801733 podStartE2EDuration="11.658853271s" 
podCreationTimestamp="2026-01-14 01:31:23 +0000 UTC" firstStartedPulling="2026-01-14 01:31:29.699330117 +0000 UTC m=+14.348601698" lastFinishedPulling="2026-01-14 01:31:34.390166058 +0000 UTC m=+19.039437639" observedRunningTime="2026-01-14 01:31:34.656773496 +0000 UTC m=+19.306045087" watchObservedRunningTime="2026-01-14 01:31:34.658853271 +0000 UTC m=+19.308124862" Jan 14 01:31:40.525789 sudo[1844]: pam_unix(sudo:session): session closed for user root Jan 14 01:31:40.544318 kernel: kauditd_printk_skb: 180 callbacks suppressed Jan 14 01:31:40.544413 kernel: audit: type=1106 audit(1768354300.525:503): pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:31:40.525000 audit[1844]: USER_END pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:31:40.525000 audit[1844]: CRED_DISP pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:31:40.556256 sshd[1843]: Connection closed by 10.0.0.1 port 47326 Jan 14 01:31:40.559114 kernel: audit: type=1104 audit(1768354300.525:504): pid=1844 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:31:40.560243 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 14 01:31:40.562000 audit[1839]: USER_END pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:31:40.582386 kernel: audit: type=1106 audit(1768354300.562:505): pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:31:40.571649 systemd-logind[1583]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:31:40.575768 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:47326.service: Deactivated successfully. Jan 14 01:31:40.581179 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:31:40.582097 systemd[1]: session-10.scope: Consumed 19.216s CPU time, 216.4M memory peak. Jan 14 01:31:40.562000 audit[1839]: CRED_DISP pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:31:40.603253 kernel: audit: type=1104 audit(1768354300.562:506): pid=1839 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:31:40.585149 systemd-logind[1583]: Removed session 10. 
Jan 14 01:31:40.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:47326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:40.621194 kernel: audit: type=1131 audit(1768354300.575:507): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.15:22-10.0.0.1:47326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:31:41.711000 audit[3288]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.721464 kernel: audit: type=1325 audit(1768354301.711:508): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.711000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedd373980 a2=0 a3=7ffedd37396c items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.751134 kernel: audit: type=1300 audit(1768354301.711:508): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffedd373980 a2=0 a3=7ffedd37396c items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.756780 kernel: audit: type=1327 audit(1768354301.711:508): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:41.711000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:41.759000 audit[3288]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.774277 kernel: audit: type=1325 audit(1768354301.759:509): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.759000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedd373980 a2=0 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:41.802432 kernel: audit: type=1300 audit(1768354301.759:509): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedd373980 a2=0 a3=0 items=0 ppid=3030 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.819000 audit[3290]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.819000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc285dd730 a2=0 a3=7ffc285dd71c items=0 ppid=3030 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.819000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:41.827000 audit[3290]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:41.827000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc285dd730 a2=0 a3=0 items=0 ppid=3030 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:41.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.340457 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:31:47.343304 kernel: audit: type=1325 audit(1768354307.319:512): table=filter:109 family=2 entries=16 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.343419 kernel: audit: type=1300 audit(1768354307.319:512): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff6c0d3090 a2=0 a3=7fff6c0d307c items=0 ppid=3030 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:47.319000 audit[3292]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.319000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff6c0d3090 a2=0 a3=7fff6c0d307c items=0 ppid=3030 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:31:47.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.387152 kernel: audit: type=1327 audit(1768354307.319:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.389000 audit[3292]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.406187 kernel: audit: type=1325 audit(1768354307.389:513): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.410395 kernel: audit: type=1300 audit(1768354307.389:513): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6c0d3090 a2=0 a3=0 items=0 ppid=3030 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:47.389000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6c0d3090 a2=0 a3=0 items=0 ppid=3030 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:47.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.445529 kernel: audit: type=1327 audit(1768354307.389:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.449000 audit[3294]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.449000 audit[3294]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdee698690 a2=0 a3=7ffdee69867c items=0 ppid=3030 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:47.489767 kernel: audit: type=1325 audit(1768354307.449:514): table=filter:111 family=2 entries=17 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.490043 kernel: audit: type=1300 audit(1768354307.449:514): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdee698690 a2=0 a3=7ffdee69867c items=0 ppid=3030 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:47.490082 kernel: audit: type=1327 audit(1768354307.449:514): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:47.503000 audit[3294]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.517165 kernel: audit: type=1325 audit(1768354307.503:515): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:47.503000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdee698690 a2=0 a3=0 items=0 ppid=3030 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:47.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:48.580000 audit[3296]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:48.580000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc01d2d6c0 a2=0 a3=7ffc01d2d6ac items=0 ppid=3030 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:48.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:48.584000 audit[3296]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:48.584000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc01d2d6c0 a2=0 a3=0 items=0 ppid=3030 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:48.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:51.231000 audit[3298]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:51.231000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffb5d0e5e0 a2=0 a3=7fffb5d0e5cc items=0 ppid=3030 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:51.249000 audit[3298]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3298 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:51.249000 audit[3298]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb5d0e5e0 a2=0 a3=0 items=0 ppid=3030 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:51.291000 audit[3300]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:51.291000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd19fedc70 a2=0 a3=7ffd19fedc5c items=0 ppid=3030 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:51.303000 audit[3300]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:51.303000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd19fedc70 a2=0 a3=0 items=0 ppid=3030 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:51.380249 systemd[1]: Created slice kubepods-besteffort-pod36eed115_691e_460d_a5cd_d20b4b03398f.slice - libcontainer container kubepods-besteffort-pod36eed115_691e_460d_a5cd_d20b4b03398f.slice. Jan 14 01:31:51.467125 kubelet[2869]: I0114 01:31:51.466828 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36eed115-691e-460d-a5cd-d20b4b03398f-tigera-ca-bundle\") pod \"calico-typha-dd644c869-hmddh\" (UID: \"36eed115-691e-460d-a5cd-d20b4b03398f\") " pod="calico-system/calico-typha-dd644c869-hmddh" Jan 14 01:31:51.468470 kubelet[2869]: I0114 01:31:51.467552 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/36eed115-691e-460d-a5cd-d20b4b03398f-typha-certs\") pod \"calico-typha-dd644c869-hmddh\" (UID: \"36eed115-691e-460d-a5cd-d20b4b03398f\") " pod="calico-system/calico-typha-dd644c869-hmddh" Jan 14 01:31:51.469630 kubelet[2869]: I0114 01:31:51.469185 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4b2\" (UniqueName: \"kubernetes.io/projected/36eed115-691e-460d-a5cd-d20b4b03398f-kube-api-access-qp4b2\") pod \"calico-typha-dd644c869-hmddh\" (UID: \"36eed115-691e-460d-a5cd-d20b4b03398f\") " pod="calico-system/calico-typha-dd644c869-hmddh" Jan 14 01:31:51.627287 systemd[1]: Created slice kubepods-besteffort-pod912ee544_2476_46ca_a747_1f192d5a1d61.slice - libcontainer container kubepods-besteffort-pod912ee544_2476_46ca_a747_1f192d5a1d61.slice. 
Jan 14 01:31:51.671055 kubelet[2869]: I0114 01:31:51.670799 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-var-lib-calico\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671055 kubelet[2869]: I0114 01:31:51.671025 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mm6m\" (UniqueName: \"kubernetes.io/projected/912ee544-2476-46ca-a747-1f192d5a1d61-kube-api-access-6mm6m\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671055 kubelet[2869]: I0114 01:31:51.671054 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-cni-net-dir\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671055 kubelet[2869]: I0114 01:31:51.671069 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-cni-log-dir\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671352 kubelet[2869]: I0114 01:31:51.671135 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-flexvol-driver-host\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671352 kubelet[2869]: I0114 
01:31:51.671216 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-policysync\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671352 kubelet[2869]: I0114 01:31:51.671243 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-xtables-lock\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671352 kubelet[2869]: I0114 01:31:51.671265 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-lib-modules\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671352 kubelet[2869]: I0114 01:31:51.671280 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912ee544-2476-46ca-a747-1f192d5a1d61-tigera-ca-bundle\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671602 kubelet[2869]: I0114 01:31:51.671300 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-cni-bin-dir\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671602 kubelet[2869]: I0114 01:31:51.671590 2869 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/912ee544-2476-46ca-a747-1f192d5a1d61-node-certs\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.671672 kubelet[2869]: I0114 01:31:51.671612 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/912ee544-2476-46ca-a747-1f192d5a1d61-var-run-calico\") pod \"calico-node-pgr6m\" (UID: \"912ee544-2476-46ca-a747-1f192d5a1d61\") " pod="calico-system/calico-node-pgr6m" Jan 14 01:31:51.706300 kubelet[2869]: E0114 01:31:51.705765 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:51.708409 containerd[1601]: time="2026-01-14T01:31:51.707570562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dd644c869-hmddh,Uid:36eed115-691e-460d-a5cd-d20b4b03398f,Namespace:calico-system,Attempt:0,}" Jan 14 01:31:51.779355 kubelet[2869]: E0114 01:31:51.778783 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.779355 kubelet[2869]: W0114 01:31:51.779039 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.780074 containerd[1601]: time="2026-01-14T01:31:51.779639586Z" level=info msg="connecting to shim 89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6" address="unix:///run/containerd/s/0acc7ff2ee6b320211f1200969d174b0f7a6d98a36a2cc5aeb7e93e7ab7e0e9c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:51.781595 kubelet[2869]: E0114 01:31:51.780706 2869 
plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.781595 kubelet[2869]: E0114 01:31:51.781215 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.781595 kubelet[2869]: W0114 01:31:51.781226 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.781595 kubelet[2869]: E0114 01:31:51.781240 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.783317 kubelet[2869]: E0114 01:31:51.783299 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.783432 kubelet[2869]: W0114 01:31:51.783399 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.783432 kubelet[2869]: E0114 01:31:51.783416 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.785167 kubelet[2869]: E0114 01:31:51.784845 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.785167 kubelet[2869]: W0114 01:31:51.785120 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.785167 kubelet[2869]: E0114 01:31:51.785136 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.785659 kubelet[2869]: E0114 01:31:51.785574 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.785659 kubelet[2869]: W0114 01:31:51.785589 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.785659 kubelet[2869]: E0114 01:31:51.785603 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.787817 kubelet[2869]: E0114 01:31:51.787556 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.787817 kubelet[2869]: W0114 01:31:51.787607 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.787817 kubelet[2869]: E0114 01:31:51.787618 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.790335 kubelet[2869]: E0114 01:31:51.790068 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.790335 kubelet[2869]: W0114 01:31:51.790172 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.790335 kubelet[2869]: E0114 01:31:51.790186 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.790569 kubelet[2869]: E0114 01:31:51.790525 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.790569 kubelet[2869]: W0114 01:31:51.790534 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.790569 kubelet[2869]: E0114 01:31:51.790542 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.791827 kubelet[2869]: E0114 01:31:51.791777 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.791827 kubelet[2869]: W0114 01:31:51.791799 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.791827 kubelet[2869]: E0114 01:31:51.791814 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.793441 kubelet[2869]: E0114 01:31:51.792279 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.793441 kubelet[2869]: W0114 01:31:51.792290 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.793441 kubelet[2869]: E0114 01:31:51.792300 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.793441 kubelet[2869]: E0114 01:31:51.793348 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.793441 kubelet[2869]: W0114 01:31:51.793360 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.793441 kubelet[2869]: E0114 01:31:51.793375 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.794689 kubelet[2869]: E0114 01:31:51.794413 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.794689 kubelet[2869]: W0114 01:31:51.794483 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.794689 kubelet[2869]: E0114 01:31:51.794498 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.796841 kubelet[2869]: E0114 01:31:51.796724 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.796841 kubelet[2869]: W0114 01:31:51.796791 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.796841 kubelet[2869]: E0114 01:31:51.796807 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.798149 kubelet[2869]: E0114 01:31:51.797301 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.798149 kubelet[2869]: W0114 01:31:51.797313 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.798149 kubelet[2869]: E0114 01:31:51.797325 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.798149 kubelet[2869]: E0114 01:31:51.797844 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.798149 kubelet[2869]: W0114 01:31:51.797855 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.798149 kubelet[2869]: E0114 01:31:51.797867 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.798321 kubelet[2869]: E0114 01:31:51.798240 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.798321 kubelet[2869]: W0114 01:31:51.798251 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.798321 kubelet[2869]: E0114 01:31:51.798260 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.799881 kubelet[2869]: E0114 01:31:51.799442 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.799881 kubelet[2869]: W0114 01:31:51.799492 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.799881 kubelet[2869]: E0114 01:31:51.799502 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.801848 kubelet[2869]: E0114 01:31:51.801768 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.801848 kubelet[2869]: W0114 01:31:51.801788 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.801848 kubelet[2869]: E0114 01:31:51.801805 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.803233 kubelet[2869]: E0114 01:31:51.802508 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.803233 kubelet[2869]: W0114 01:31:51.802518 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.803233 kubelet[2869]: E0114 01:31:51.802528 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.808407 kubelet[2869]: E0114 01:31:51.808164 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.808407 kubelet[2869]: W0114 01:31:51.808375 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.808407 kubelet[2869]: E0114 01:31:51.808397 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.811107 kubelet[2869]: E0114 01:31:51.810576 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:31:51.816288 kubelet[2869]: E0114 01:31:51.816254 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.816288 kubelet[2869]: W0114 01:31:51.816274 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.816288 kubelet[2869]: E0114 01:31:51.816290 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.818709 kubelet[2869]: E0114 01:31:51.818597 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.818709 kubelet[2869]: W0114 01:31:51.818670 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.818709 kubelet[2869]: E0114 01:31:51.818689 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.824669 kubelet[2869]: E0114 01:31:51.824608 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.824669 kubelet[2869]: W0114 01:31:51.824630 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.824669 kubelet[2869]: E0114 01:31:51.824649 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.830622 kubelet[2869]: E0114 01:31:51.830296 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.830622 kubelet[2869]: W0114 01:31:51.830373 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.830622 kubelet[2869]: E0114 01:31:51.830390 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.831722 kubelet[2869]: E0114 01:31:51.831642 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.831722 kubelet[2869]: W0114 01:31:51.831716 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.831816 kubelet[2869]: E0114 01:31:51.831732 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.836164 kubelet[2869]: E0114 01:31:51.835819 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.836164 kubelet[2869]: W0114 01:31:51.836109 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.836164 kubelet[2869]: E0114 01:31:51.836128 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.839484 kubelet[2869]: E0114 01:31:51.839240 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.839484 kubelet[2869]: W0114 01:31:51.839263 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.839484 kubelet[2869]: E0114 01:31:51.839280 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.873381 kubelet[2869]: E0114 01:31:51.873341 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.873586 kubelet[2869]: W0114 01:31:51.873563 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.873692 kubelet[2869]: E0114 01:31:51.873671 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.874723 kubelet[2869]: E0114 01:31:51.874704 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.874828 kubelet[2869]: W0114 01:31:51.874811 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.875232 kubelet[2869]: E0114 01:31:51.875213 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.876413 kubelet[2869]: E0114 01:31:51.876394 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.876583 kubelet[2869]: W0114 01:31:51.876496 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.876583 kubelet[2869]: E0114 01:31:51.876517 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.878429 kubelet[2869]: E0114 01:31:51.877545 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.878429 kubelet[2869]: W0114 01:31:51.877563 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.878429 kubelet[2869]: E0114 01:31:51.877578 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.885851 kubelet[2869]: E0114 01:31:51.885194 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.885851 kubelet[2869]: W0114 01:31:51.885232 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.885851 kubelet[2869]: E0114 01:31:51.885266 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.886680 kubelet[2869]: E0114 01:31:51.886658 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.887242 kubelet[2869]: W0114 01:31:51.887219 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.887441 kubelet[2869]: E0114 01:31:51.887425 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.888317 systemd[1]: Started cri-containerd-89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6.scope - libcontainer container 89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6. 
Jan 14 01:31:51.892334 kubelet[2869]: E0114 01:31:51.891858 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.892813 kubelet[2869]: W0114 01:31:51.892649 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.893542 kubelet[2869]: E0114 01:31:51.893378 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.897127 kubelet[2869]: E0114 01:31:51.896810 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.897437 kubelet[2869]: W0114 01:31:51.897419 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.898277 kubelet[2869]: E0114 01:31:51.898133 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.901751 kubelet[2869]: E0114 01:31:51.901709 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.902187 kubelet[2869]: W0114 01:31:51.902110 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.902286 kubelet[2869]: E0114 01:31:51.902265 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.902644 kubelet[2869]: E0114 01:31:51.902627 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.902788 kubelet[2869]: W0114 01:31:51.902710 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.902788 kubelet[2869]: E0114 01:31:51.902731 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.903521 kubelet[2869]: E0114 01:31:51.903424 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.903521 kubelet[2869]: W0114 01:31:51.903441 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.903521 kubelet[2869]: E0114 01:31:51.903455 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.904369 kubelet[2869]: E0114 01:31:51.904351 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.904486 kubelet[2869]: W0114 01:31:51.904436 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.904486 kubelet[2869]: E0114 01:31:51.904456 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.906246 kubelet[2869]: E0114 01:31:51.906199 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.906246 kubelet[2869]: W0114 01:31:51.906215 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.906246 kubelet[2869]: E0114 01:31:51.906228 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.907068 kubelet[2869]: I0114 01:31:51.906862 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a92d2670-8bc7-4318-8d73-b12be2d0a45e-kubelet-dir\") pod \"csi-node-driver-9jt56\" (UID: \"a92d2670-8bc7-4318-8d73-b12be2d0a45e\") " pod="calico-system/csi-node-driver-9jt56" Jan 14 01:31:51.907389 kubelet[2869]: E0114 01:31:51.907371 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.908080 kubelet[2869]: W0114 01:31:51.907456 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.908199 kubelet[2869]: E0114 01:31:51.908179 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.909228 kubelet[2869]: E0114 01:31:51.909209 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.909314 kubelet[2869]: W0114 01:31:51.909298 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.909676 kubelet[2869]: E0114 01:31:51.909454 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.910559 kubelet[2869]: E0114 01:31:51.910541 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.910752 kubelet[2869]: W0114 01:31:51.910710 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.910752 kubelet[2869]: E0114 01:31:51.910734 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.912125 kubelet[2869]: E0114 01:31:51.912078 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.912125 kubelet[2869]: W0114 01:31:51.912095 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.912125 kubelet[2869]: E0114 01:31:51.912108 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.912462 kubelet[2869]: I0114 01:31:51.912438 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a92d2670-8bc7-4318-8d73-b12be2d0a45e-registration-dir\") pod \"csi-node-driver-9jt56\" (UID: \"a92d2670-8bc7-4318-8d73-b12be2d0a45e\") " pod="calico-system/csi-node-driver-9jt56" Jan 14 01:31:51.913076 kubelet[2869]: E0114 01:31:51.912868 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.913271 kubelet[2869]: W0114 01:31:51.912885 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.913342 kubelet[2869]: E0114 01:31:51.913328 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.915836 kubelet[2869]: E0114 01:31:51.915816 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.916560 kubelet[2869]: W0114 01:31:51.916139 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.916560 kubelet[2869]: E0114 01:31:51.916162 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.917233 kubelet[2869]: E0114 01:31:51.917217 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.917346 kubelet[2869]: W0114 01:31:51.917328 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.917426 kubelet[2869]: E0114 01:31:51.917410 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.918172 kubelet[2869]: I0114 01:31:51.918112 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a92d2670-8bc7-4318-8d73-b12be2d0a45e-socket-dir\") pod \"csi-node-driver-9jt56\" (UID: \"a92d2670-8bc7-4318-8d73-b12be2d0a45e\") " pod="calico-system/csi-node-driver-9jt56" Jan 14 01:31:51.918310 kubelet[2869]: E0114 01:31:51.918293 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.918411 kubelet[2869]: W0114 01:31:51.918376 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.918411 kubelet[2869]: E0114 01:31:51.918395 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.919813 kubelet[2869]: E0114 01:31:51.919794 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.920230 kubelet[2869]: W0114 01:31:51.919885 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.920327 kubelet[2869]: E0114 01:31:51.920303 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.921390 kubelet[2869]: E0114 01:31:51.921336 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.921390 kubelet[2869]: W0114 01:31:51.921356 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.921390 kubelet[2869]: E0114 01:31:51.921371 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.922331 kubelet[2869]: E0114 01:31:51.922286 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.922331 kubelet[2869]: W0114 01:31:51.922302 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.922331 kubelet[2869]: E0114 01:31:51.922315 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.924234 kubelet[2869]: E0114 01:31:51.924062 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.924234 kubelet[2869]: W0114 01:31:51.924081 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.924234 kubelet[2869]: E0114 01:31:51.924097 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.926102 kubelet[2869]: E0114 01:31:51.926084 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.926494 kubelet[2869]: W0114 01:31:51.926294 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.926494 kubelet[2869]: E0114 01:31:51.926318 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.927513 kubelet[2869]: E0114 01:31:51.927494 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.927614 kubelet[2869]: W0114 01:31:51.927598 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.927735 kubelet[2869]: E0114 01:31:51.927673 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.928267 kubelet[2869]: E0114 01:31:51.928249 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.928416 kubelet[2869]: W0114 01:31:51.928333 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.928416 kubelet[2869]: E0114 01:31:51.928353 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:51.930318 kubelet[2869]: E0114 01:31:51.930297 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:51.930503 kubelet[2869]: W0114 01:31:51.930386 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:51.930503 kubelet[2869]: E0114 01:31:51.930405 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:51.942402 kubelet[2869]: E0114 01:31:51.942358 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:51.944506 containerd[1601]: time="2026-01-14T01:31:51.944444307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pgr6m,Uid:912ee544-2476-46ca-a747-1f192d5a1d61,Namespace:calico-system,Attempt:0,}" Jan 14 01:31:51.949000 audit: BPF prog-id=149 op=LOAD Jan 14 01:31:51.956000 audit: BPF prog-id=150 op=LOAD Jan 14 01:31:51.956000 audit[3354]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.956000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:31:51.956000 audit[3354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.956000 audit: BPF prog-id=151 op=LOAD Jan 14 01:31:51.956000 
audit[3354]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.956000 audit: BPF prog-id=152 op=LOAD Jan 14 01:31:51.956000 audit[3354]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.957000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:31:51.957000 audit[3354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.957000 audit: BPF 
prog-id=151 op=UNLOAD Jan 14 01:31:51.957000 audit[3354]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:51.957000 audit: BPF prog-id=153 op=LOAD Jan 14 01:31:51.957000 audit[3354]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3311 pid=3354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:51.957000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663938333839353533303334613263613337643930303261346135 Jan 14 01:31:52.032580 kubelet[2869]: E0114 01:31:52.032446 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.032580 kubelet[2869]: W0114 01:31:52.032489 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.032580 kubelet[2869]: E0114 01:31:52.032529 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:52.034610 kubelet[2869]: I0114 01:31:52.032777 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hrq\" (UniqueName: \"kubernetes.io/projected/a92d2670-8bc7-4318-8d73-b12be2d0a45e-kube-api-access-g7hrq\") pod \"csi-node-driver-9jt56\" (UID: \"a92d2670-8bc7-4318-8d73-b12be2d0a45e\") " pod="calico-system/csi-node-driver-9jt56" Jan 14 01:31:52.037223 kubelet[2869]: E0114 01:31:52.036376 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.037223 kubelet[2869]: W0114 01:31:52.036761 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.037223 kubelet[2869]: E0114 01:31:52.036783 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:52.058470 kubelet[2869]: E0114 01:31:52.052304 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.058470 kubelet[2869]: W0114 01:31:52.052413 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.058470 kubelet[2869]: E0114 01:31:52.052721 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:52.058470 kubelet[2869]: E0114 01:31:52.057296 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.058470 kubelet[2869]: W0114 01:31:52.057329 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.058470 kubelet[2869]: E0114 01:31:52.057361 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:52.060343 kubelet[2869]: E0114 01:31:52.059298 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.060343 kubelet[2869]: W0114 01:31:52.059313 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.060343 kubelet[2869]: E0114 01:31:52.059335 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:31:52.062300 kubelet[2869]: E0114 01:31:52.061281 2869 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:31:52.062300 kubelet[2869]: W0114 01:31:52.061303 2869 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:31:52.062300 kubelet[2869]: E0114 01:31:52.061321 2869 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:31:52.094148 containerd[1601]: time="2026-01-14T01:31:52.089860628Z" level=info msg="connecting to shim be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e" address="unix:///run/containerd/s/90901150b335529d026808932b08feeab60441059faa030dca6b8ba96c724879" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:31:52.128018 kubelet[2869]: I0114 01:31:52.127236 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a92d2670-8bc7-4318-8d73-b12be2d0a45e-varrun\") pod \"csi-node-driver-9jt56\" (UID: \"a92d2670-8bc7-4318-8d73-b12be2d0a45e\") " pod="calico-system/csi-node-driver-9jt56" Jan 14 01:31:52.273050 systemd[1]: Started cri-containerd-be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e.scope - libcontainer container be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e. Jan 14 01:31:52.340545 containerd[1601]: time="2026-01-14T01:31:52.340402301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dd644c869-hmddh,Uid:36eed115-691e-460d-a5cd-d20b4b03398f,Namespace:calico-system,Attempt:0,} returns sandbox id \"89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6\"" Jan 14 01:31:52.344000 audit: BPF prog-id=154 op=LOAD Jan 14 01:31:52.351553 kernel: kauditd_printk_skb: 42 callbacks suppressed Jan 14 01:31:52.351623 kernel: audit: type=1334 audit(1768354312.344:530): prog-id=154 op=LOAD Jan 14 01:31:52.363690 kubelet[2869]: E0114 01:31:52.363397 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:52.358000 audit[3489]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:52.358000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd8fb5b8e0 a2=0 a3=7ffd8fb5b8cc items=0 ppid=3030 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.388350 containerd[1601]: time="2026-01-14T01:31:52.386239211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:31:52.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:52.365000 audit: BPF prog-id=155 op=LOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=156 op=LOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=157 op=LOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.365000 audit: BPF prog-id=158 op=LOAD Jan 14 01:31:52.365000 audit[3447]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3425 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265313263396661623137616266313030313435346663396138336534 Jan 14 01:31:52.377000 audit[3489]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:52.377000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8fb5b8e0 a2=0 a3=0 items=0 ppid=3030 pid=3489 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:52.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:52.520651 containerd[1601]: time="2026-01-14T01:31:52.520548946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pgr6m,Uid:912ee544-2476-46ca-a747-1f192d5a1d61,Namespace:calico-system,Attempt:0,} returns sandbox id \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\"" Jan 14 01:31:52.524642 kubelet[2869]: E0114 01:31:52.524342 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:53.709315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859211614.mount: Deactivated successfully. 
Jan 14 01:31:54.100394 kubelet[2869]: E0114 01:31:54.100144 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:31:55.861435 containerd[1601]: time="2026-01-14T01:31:55.860561997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:55.896641 containerd[1601]: time="2026-01-14T01:31:55.876283790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35231371" Jan 14 01:31:55.897635 containerd[1601]: time="2026-01-14T01:31:55.897497661Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:55.900764 containerd[1601]: time="2026-01-14T01:31:55.900575718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:55.901700 containerd[1601]: time="2026-01-14T01:31:55.901561098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.515277615s" Jan 14 01:31:55.901700 containerd[1601]: time="2026-01-14T01:31:55.901657688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:31:55.912359 containerd[1601]: time="2026-01-14T01:31:55.909683658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:31:55.963459 containerd[1601]: time="2026-01-14T01:31:55.963154965Z" level=info msg="CreateContainer within sandbox \"89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:31:55.974781 containerd[1601]: time="2026-01-14T01:31:55.974638882Z" level=info msg="Container fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:55.985683 containerd[1601]: time="2026-01-14T01:31:55.985598209Z" level=info msg="CreateContainer within sandbox \"89f98389553034a2ca37d9002a4a5008db68b044330270b799f871b4f94953a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe\"" Jan 14 01:31:55.986653 containerd[1601]: time="2026-01-14T01:31:55.986596232Z" level=info msg="StartContainer for \"fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe\"" Jan 14 01:31:55.989406 containerd[1601]: time="2026-01-14T01:31:55.989164113Z" level=info msg="connecting to shim fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe" address="unix:///run/containerd/s/0acc7ff2ee6b320211f1200969d174b0f7a6d98a36a2cc5aeb7e93e7ab7e0e9c" protocol=ttrpc version=3 Jan 14 01:31:56.030268 systemd[1]: Started cri-containerd-fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe.scope - libcontainer container fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe. 
Jan 14 01:31:56.074000 audit: BPF prog-id=159 op=LOAD Jan 14 01:31:56.075000 audit: BPF prog-id=160 op=LOAD Jan 14 01:31:56.075000 audit[3506]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.075000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:31:56.075000 audit[3506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.075000 audit: BPF prog-id=161 op=LOAD Jan 14 01:31:56.075000 audit[3506]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.075000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.076000 audit: BPF prog-id=162 op=LOAD Jan 14 01:31:56.076000 audit[3506]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.076000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:31:56.076000 audit[3506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.076000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:31:56.076000 audit[3506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:56.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.076000 audit: BPF prog-id=163 op=LOAD Jan 14 01:31:56.076000 audit[3506]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3311 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665663262626533303838383636356463323735356434376330626564 Jan 14 01:31:56.108705 kubelet[2869]: E0114 01:31:56.105773 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:31:56.185998 containerd[1601]: time="2026-01-14T01:31:56.183622237Z" level=info msg="StartContainer for \"fef2bbe30888665dc2755d47c0bede4a73fbfbf6809e0105dd81311fb6b22efe\" returns successfully" Jan 14 01:31:56.682426 containerd[1601]: time="2026-01-14T01:31:56.682339662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:56.683835 containerd[1601]: time="2026-01-14T01:31:56.683542244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes 
read=0" Jan 14 01:31:56.685855 containerd[1601]: time="2026-01-14T01:31:56.685734436Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:56.689021 containerd[1601]: time="2026-01-14T01:31:56.688847702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:31:56.689738 containerd[1601]: time="2026-01-14T01:31:56.689422508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 779.645024ms" Jan 14 01:31:56.689738 containerd[1601]: time="2026-01-14T01:31:56.689492578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:31:56.696522 containerd[1601]: time="2026-01-14T01:31:56.696493655Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:31:56.710501 containerd[1601]: time="2026-01-14T01:31:56.710374462Z" level=info msg="Container 9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:31:56.733275 containerd[1601]: time="2026-01-14T01:31:56.733054731Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd\"" Jan 14 01:31:56.734427 containerd[1601]: time="2026-01-14T01:31:56.734320637Z" level=info msg="StartContainer for \"9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd\"" Jan 14 01:31:56.738196 containerd[1601]: time="2026-01-14T01:31:56.738026578Z" level=info msg="connecting to shim 9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd" address="unix:///run/containerd/s/90901150b335529d026808932b08feeab60441059faa030dca6b8ba96c724879" protocol=ttrpc version=3 Jan 14 01:31:56.801296 systemd[1]: Started cri-containerd-9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd.scope - libcontainer container 9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd. Jan 14 01:31:56.900000 audit: BPF prog-id=164 op=LOAD Jan 14 01:31:56.900000 audit[3545]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3425 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963323564303336373562393236303765306261363636623761353834 Jan 14 01:31:56.900000 audit: BPF prog-id=165 op=LOAD Jan 14 01:31:56.900000 audit[3545]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3425 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.900000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963323564303336373562393236303765306261363636623761353834 Jan 14 01:31:56.900000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:31:56.900000 audit[3545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963323564303336373562393236303765306261363636623761353834 Jan 14 01:31:56.900000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:31:56.900000 audit[3545]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:56.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963323564303336373562393236303765306261363636623761353834 Jan 14 01:31:56.900000 audit: BPF prog-id=166 op=LOAD Jan 14 01:31:56.900000 audit[3545]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3425 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:31:56.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963323564303336373562393236303765306261363636623761353834 Jan 14 01:31:56.990635 containerd[1601]: time="2026-01-14T01:31:56.989408226Z" level=info msg="StartContainer for \"9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd\" returns successfully" Jan 14 01:31:57.010601 systemd[1]: cri-containerd-9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd.scope: Deactivated successfully. Jan 14 01:31:57.016000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:31:57.018400 containerd[1601]: time="2026-01-14T01:31:57.018316487Z" level=info msg="received container exit event container_id:\"9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd\" id:\"9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd\" pid:3557 exited_at:{seconds:1768354317 nanos:16080032}" Jan 14 01:31:57.090861 kubelet[2869]: E0114 01:31:57.090755 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:57.102154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c25d03675b92607e0ba666b7a584d74b2adbed3aa6304d0b706592677fd8afd-rootfs.mount: Deactivated successfully. 
Jan 14 01:31:57.132871 kubelet[2869]: E0114 01:31:57.132731 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:57.218644 kubelet[2869]: I0114 01:31:57.218500 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dd644c869-hmddh" podStartSLOduration=2.6955883590000003 podStartE2EDuration="6.217881955s" podCreationTimestamp="2026-01-14 01:31:51 +0000 UTC" firstStartedPulling="2026-01-14 01:31:52.385426349 +0000 UTC m=+37.034697929" lastFinishedPulling="2026-01-14 01:31:55.907719944 +0000 UTC m=+40.556991525" observedRunningTime="2026-01-14 01:31:57.169821386 +0000 UTC m=+41.819092957" watchObservedRunningTime="2026-01-14 01:31:57.217881955 +0000 UTC m=+41.867153535" Jan 14 01:31:57.315000 audit[3599]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:57.315000 audit[3599]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff27f300c0 a2=0 a3=7fff27f300ac items=0 ppid=3030 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:57.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:57.328000 audit[3599]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:31:57.328000 audit[3599]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff27f300c0 a2=0 a3=7fff27f300ac items=0 ppid=3030 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:31:57.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:31:58.096048 kubelet[2869]: E0114 01:31:58.095650 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:31:58.112433 kubelet[2869]: E0114 01:31:58.112408 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:58.112621 kubelet[2869]: E0114 01:31:58.112443 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:31:58.113567 containerd[1601]: time="2026-01-14T01:31:58.113380908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:31:59.118651 kubelet[2869]: E0114 01:31:59.118160 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:00.106263 kubelet[2869]: E0114 01:32:00.105682 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:32:00.814442 containerd[1601]: time="2026-01-14T01:32:00.814309631Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:00.816454 containerd[1601]: time="2026-01-14T01:32:00.816274066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:32:00.818106 containerd[1601]: time="2026-01-14T01:32:00.818066961Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:00.821127 containerd[1601]: time="2026-01-14T01:32:00.821077958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:00.822708 containerd[1601]: time="2026-01-14T01:32:00.822409942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.708982077s" Jan 14 01:32:00.822708 containerd[1601]: time="2026-01-14T01:32:00.822500190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:32:00.833345 containerd[1601]: time="2026-01-14T01:32:00.833192489Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:32:00.846803 containerd[1601]: time="2026-01-14T01:32:00.846739709Z" level=info msg="Container efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a: CDI devices from CRI Config.CDIDevices: []" Jan 14 
01:32:00.859669 containerd[1601]: time="2026-01-14T01:32:00.859544774Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a\"" Jan 14 01:32:00.860582 containerd[1601]: time="2026-01-14T01:32:00.860554575Z" level=info msg="StartContainer for \"efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a\"" Jan 14 01:32:00.862479 containerd[1601]: time="2026-01-14T01:32:00.862394892Z" level=info msg="connecting to shim efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a" address="unix:///run/containerd/s/90901150b335529d026808932b08feeab60441059faa030dca6b8ba96c724879" protocol=ttrpc version=3 Jan 14 01:32:00.899232 systemd[1]: Started cri-containerd-efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a.scope - libcontainer container efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a. 
Jan 14 01:32:01.017000 audit: BPF prog-id=167 op=LOAD Jan 14 01:32:01.028549 kernel: kauditd_printk_skb: 62 callbacks suppressed Jan 14 01:32:01.028696 kernel: audit: type=1334 audit(1768354321.017:556): prog-id=167 op=LOAD Jan 14 01:32:01.028751 kernel: audit: type=1300 audit(1768354321.017:556): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.017000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.044646 kernel: audit: type=1327 audit(1768354321.017:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.017000 audit: BPF prog-id=168 op=LOAD Jan 14 01:32:01.065500 kernel: audit: type=1334 audit(1768354321.017:557): prog-id=168 op=LOAD Jan 14 01:32:01.065586 kernel: audit: type=1300 audit(1768354321.017:557): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.017000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.095883 kernel: audit: type=1327 audit(1768354321.017:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.096090 kernel: audit: type=1334 audit(1768354321.017:558): prog-id=168 op=UNLOAD Jan 14 01:32:01.017000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:32:01.017000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.121477 kernel: audit: type=1300 audit(1768354321.017:558): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.121680 kernel: audit: type=1327 audit(1768354321.017:558): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.017000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.018000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:32:01.145331 kernel: audit: type=1334 audit(1768354321.018:559): prog-id=167 op=UNLOAD Jan 14 01:32:01.018000 audit[3610]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.018000 audit: BPF prog-id=169 op=LOAD Jan 14 01:32:01.018000 audit[3610]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3425 pid=3610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:01.018000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566626631353639633632366533353962623562326366623364633133 Jan 14 01:32:01.189029 containerd[1601]: time="2026-01-14T01:32:01.188736750Z" level=info msg="StartContainer for \"efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a\" returns successfully" Jan 14 01:32:02.195664 kubelet[2869]: E0114 01:32:02.186821 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:32:02.415336 kubelet[2869]: E0114 01:32:02.413876 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:03.138655 systemd[1]: cri-containerd-efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a.scope: Deactivated successfully. Jan 14 01:32:03.147000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:32:03.148371 systemd[1]: cri-containerd-efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a.scope: Consumed 1.772s CPU time, 177M memory peak, 4.4M read from disk, 171.3M written to disk. 
Jan 14 01:32:03.713424 containerd[1601]: time="2026-01-14T01:32:03.713347406Z" level=info msg="received container exit event container_id:\"efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a\" id:\"efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a\" pid:3623 exited_at:{seconds:1768354323 nanos:711578841}" Jan 14 01:32:10.889574 kubelet[2869]: I0114 01:32:10.882162 2869 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:32:10.889574 kubelet[2869]: E0114 01:32:10.889298 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.725s" Jan 14 01:32:11.536196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efbf1569c626e359bb5b2cfb3dc13f96edbbb38f1844ba502a3909551b57ca1a-rootfs.mount: Deactivated successfully. Jan 14 01:32:13.400785 kubelet[2869]: E0114 01:32:13.399862 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.51s" Jan 14 01:32:15.916813 kubelet[2869]: E0114 01:32:15.915271 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.511s" Jan 14 01:32:17.144115 kubelet[2869]: E0114 01:32:17.143552 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.225s" Jan 14 01:32:18.032374 kubelet[2869]: I0114 01:32:18.030744 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6bs\" (UniqueName: \"kubernetes.io/projected/35648de2-563a-403b-bdd1-f0409de12a27-kube-api-access-nj6bs\") pod \"calico-kube-controllers-546579f487-48d5w\" (UID: \"35648de2-563a-403b-bdd1-f0409de12a27\") " pod="calico-system/calico-kube-controllers-546579f487-48d5w" Jan 14 01:32:18.032374 kubelet[2869]: I0114 01:32:18.031137 2869 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35648de2-563a-403b-bdd1-f0409de12a27-tigera-ca-bundle\") pod \"calico-kube-controllers-546579f487-48d5w\" (UID: \"35648de2-563a-403b-bdd1-f0409de12a27\") " pod="calico-system/calico-kube-controllers-546579f487-48d5w" Jan 14 01:32:18.037884 kubelet[2869]: E0114 01:32:18.037735 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:18.290871 kubelet[2869]: I0114 01:32:18.289306 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-656rr\" (UniqueName: \"kubernetes.io/projected/277e32d3-813e-4a52-82ac-39307655fe89-kube-api-access-656rr\") pod \"coredns-674b8bbfcf-dcn4k\" (UID: \"277e32d3-813e-4a52-82ac-39307655fe89\") " pod="kube-system/coredns-674b8bbfcf-dcn4k" Jan 14 01:32:18.290871 kubelet[2869]: I0114 01:32:18.289601 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/277e32d3-813e-4a52-82ac-39307655fe89-config-volume\") pod \"coredns-674b8bbfcf-dcn4k\" (UID: \"277e32d3-813e-4a52-82ac-39307655fe89\") " pod="kube-system/coredns-674b8bbfcf-dcn4k" Jan 14 01:32:18.295017 containerd[1601]: time="2026-01-14T01:32:18.292392494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:32:18.521478 systemd[1]: Created slice kubepods-besteffort-poda92d2670_8bc7_4318_8d73_b12be2d0a45e.slice - libcontainer container kubepods-besteffort-poda92d2670_8bc7_4318_8d73_b12be2d0a45e.slice. 
Jan 14 01:32:20.328293 kubelet[2869]: I0114 01:32:20.314871 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5rs\" (UniqueName: \"kubernetes.io/projected/c32ecf43-33bb-4f07-8af2-75af73cd7967-kube-api-access-9f5rs\") pod \"calico-apiserver-77c46b477-wkc27\" (UID: \"c32ecf43-33bb-4f07-8af2-75af73cd7967\") " pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" Jan 14 01:32:20.781276 kubelet[2869]: I0114 01:32:20.780154 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-ca-bundle\") pod \"whisker-659b5bb58d-6tgvx\" (UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " pod="calico-system/whisker-659b5bb58d-6tgvx" Jan 14 01:32:25.104820 kubelet[2869]: I0114 01:32:25.096473 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-backend-key-pair\") pod \"whisker-659b5bb58d-6tgvx\" (UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " pod="calico-system/whisker-659b5bb58d-6tgvx" Jan 14 01:32:29.995583 kubelet[2869]: I0114 01:32:29.995197 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72m2\" (UniqueName: \"kubernetes.io/projected/b77f400e-9c3a-49b0-abc0-b3cf084634d3-kube-api-access-c72m2\") pod \"whisker-659b5bb58d-6tgvx\" (UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " pod="calico-system/whisker-659b5bb58d-6tgvx" Jan 14 01:32:32.550213 kubelet[2869]: I0114 01:32:32.549738 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c32ecf43-33bb-4f07-8af2-75af73cd7967-calico-apiserver-certs\") pod 
\"calico-apiserver-77c46b477-wkc27\" (UID: \"c32ecf43-33bb-4f07-8af2-75af73cd7967\") " pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" Jan 14 01:32:32.582014 systemd[1]: Created slice kubepods-besteffort-pod35648de2_563a_403b_bdd1_f0409de12a27.slice - libcontainer container kubepods-besteffort-pod35648de2_563a_403b_bdd1_f0409de12a27.slice. Jan 14 01:32:32.597882 containerd[1601]: time="2026-01-14T01:32:32.596329025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jt56,Uid:a92d2670-8bc7-4318-8d73-b12be2d0a45e,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:32.702688 containerd[1601]: time="2026-01-14T01:32:32.701214023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546579f487-48d5w,Uid:35648de2-563a-403b-bdd1-f0409de12a27,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:32.704439 kubelet[2869]: E0114 01:32:32.704299 2869 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.516s" Jan 14 01:32:32.742825 systemd[1]: Created slice kubepods-besteffort-podb77f400e_9c3a_49b0_abc0_b3cf084634d3.slice - libcontainer container kubepods-besteffort-podb77f400e_9c3a_49b0_abc0_b3cf084634d3.slice. 
Jan 14 01:32:32.773270 kubelet[2869]: I0114 01:32:32.773048 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6gf\" (UniqueName: \"kubernetes.io/projected/898e39c7-945e-4928-a3eb-790aff1d14eb-kube-api-access-tw6gf\") pod \"coredns-674b8bbfcf-68nvj\" (UID: \"898e39c7-945e-4928-a3eb-790aff1d14eb\") " pod="kube-system/coredns-674b8bbfcf-68nvj" Jan 14 01:32:32.776442 kubelet[2869]: I0114 01:32:32.776414 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/898e39c7-945e-4928-a3eb-790aff1d14eb-config-volume\") pod \"coredns-674b8bbfcf-68nvj\" (UID: \"898e39c7-945e-4928-a3eb-790aff1d14eb\") " pod="kube-system/coredns-674b8bbfcf-68nvj" Jan 14 01:32:32.829435 systemd[1]: Created slice kubepods-burstable-pod898e39c7_945e_4928_a3eb_790aff1d14eb.slice - libcontainer container kubepods-burstable-pod898e39c7_945e_4928_a3eb_790aff1d14eb.slice. 
Jan 14 01:32:32.890762 kubelet[2869]: I0114 01:32:32.886073 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1821a0db-e895-49f0-8081-ae8dd6cf61e7-goldmane-key-pair\") pod \"goldmane-666569f655-5vwfg\" (UID: \"1821a0db-e895-49f0-8081-ae8dd6cf61e7\") " pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:32.900830 kubelet[2869]: I0114 01:32:32.900552 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1821a0db-e895-49f0-8081-ae8dd6cf61e7-config\") pod \"goldmane-666569f655-5vwfg\" (UID: \"1821a0db-e895-49f0-8081-ae8dd6cf61e7\") " pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:32.900830 kubelet[2869]: I0114 01:32:32.900786 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x25n9\" (UniqueName: \"kubernetes.io/projected/1821a0db-e895-49f0-8081-ae8dd6cf61e7-kube-api-access-x25n9\") pod \"goldmane-666569f655-5vwfg\" (UID: \"1821a0db-e895-49f0-8081-ae8dd6cf61e7\") " pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:32.901487 kubelet[2869]: I0114 01:32:32.900847 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1821a0db-e895-49f0-8081-ae8dd6cf61e7-goldmane-ca-bundle\") pod \"goldmane-666569f655-5vwfg\" (UID: \"1821a0db-e895-49f0-8081-ae8dd6cf61e7\") " pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:32.921867 systemd[1]: Created slice kubepods-besteffort-podc32ecf43_33bb_4f07_8af2_75af73cd7967.slice - libcontainer container kubepods-besteffort-podc32ecf43_33bb_4f07_8af2_75af73cd7967.slice. 
Jan 14 01:32:32.933404 kubelet[2869]: E0114 01:32:32.932227 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:32.935069 systemd[1]: Created slice kubepods-burstable-pod277e32d3_813e_4a52_82ac_39307655fe89.slice - libcontainer container kubepods-burstable-pod277e32d3_813e_4a52_82ac_39307655fe89.slice. Jan 14 01:32:32.955714 kubelet[2869]: E0114 01:32:32.955667 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:32.959688 containerd[1601]: time="2026-01-14T01:32:32.958495908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-wkc27,Uid:c32ecf43-33bb-4f07-8af2-75af73cd7967,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:32:32.969286 containerd[1601]: time="2026-01-14T01:32:32.969015802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcn4k,Uid:277e32d3-813e-4a52-82ac-39307655fe89,Namespace:kube-system,Attempt:0,}" Jan 14 01:32:32.990594 systemd[1]: Created slice kubepods-besteffort-pod1821a0db_e895_49f0_8081_ae8dd6cf61e7.slice - libcontainer container kubepods-besteffort-pod1821a0db_e895_49f0_8081_ae8dd6cf61e7.slice. Jan 14 01:32:33.005413 systemd[1]: Created slice kubepods-besteffort-podff2a83bd_ca30_4810_bc00_617909aaca25.slice - libcontainer container kubepods-besteffort-podff2a83bd_ca30_4810_bc00_617909aaca25.slice. 
Jan 14 01:32:33.168083 containerd[1601]: time="2026-01-14T01:32:33.087883585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659b5bb58d-6tgvx,Uid:b77f400e-9c3a-49b0-abc0-b3cf084634d3,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:33.187011 kubelet[2869]: I0114 01:32:33.183498 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff2a83bd-ca30-4810-bc00-617909aaca25-calico-apiserver-certs\") pod \"calico-apiserver-77c46b477-q67mc\" (UID: \"ff2a83bd-ca30-4810-bc00-617909aaca25\") " pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" Jan 14 01:32:33.187011 kubelet[2869]: I0114 01:32:33.184243 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47q8q\" (UniqueName: \"kubernetes.io/projected/ff2a83bd-ca30-4810-bc00-617909aaca25-kube-api-access-47q8q\") pod \"calico-apiserver-77c46b477-q67mc\" (UID: \"ff2a83bd-ca30-4810-bc00-617909aaca25\") " pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" Jan 14 01:32:33.189418 kubelet[2869]: E0114 01:32:33.188626 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:33.195802 containerd[1601]: time="2026-01-14T01:32:33.195070514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68nvj,Uid:898e39c7-945e-4928-a3eb-790aff1d14eb,Namespace:kube-system,Attempt:0,}" Jan 14 01:32:33.332308 containerd[1601]: time="2026-01-14T01:32:33.331327067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5vwfg,Uid:1821a0db-e895-49f0-8081-ae8dd6cf61e7,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:33.626192 containerd[1601]: time="2026-01-14T01:32:33.625029867Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-77c46b477-q67mc,Uid:ff2a83bd-ca30-4810-bc00-617909aaca25,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:32:33.758779 containerd[1601]: time="2026-01-14T01:32:33.758621034Z" level=error msg="Failed to destroy network for sandbox \"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.764092 systemd[1]: run-netns-cni\x2d6d2e5c2a\x2d6023\x2dc117\x2d3fd3\x2db7561dc5690c.mount: Deactivated successfully. Jan 14 01:32:33.792267 containerd[1601]: time="2026-01-14T01:32:33.791309387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jt56,Uid:a92d2670-8bc7-4318-8d73-b12be2d0a45e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.795250 kubelet[2869]: E0114 01:32:33.792742 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.795250 kubelet[2869]: E0114 01:32:33.793009 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jt56" Jan 14 01:32:33.795250 kubelet[2869]: E0114 01:32:33.793303 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jt56" Jan 14 01:32:33.797505 kubelet[2869]: E0114 01:32:33.793418 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e55ed4dbdb8afff7ed84769f553ef8bf323e75c70dd2b93c6e22997f2df790fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:32:33.843166 containerd[1601]: time="2026-01-14T01:32:33.842856650Z" level=error msg="Failed to destroy network for sandbox \"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.843166 containerd[1601]: time="2026-01-14T01:32:33.843048518Z" level=error msg="Failed to destroy network for sandbox 
\"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.850440 systemd[1]: run-netns-cni\x2d1454fb74\x2d35da\x2d8ee4\x2d6cf6\x2d98ca358c52dc.mount: Deactivated successfully. Jan 14 01:32:33.850705 systemd[1]: run-netns-cni\x2d49e1caf9\x2def3b\x2daad2\x2d0398\x2decef5eca2d34.mount: Deactivated successfully. Jan 14 01:32:33.852249 containerd[1601]: time="2026-01-14T01:32:33.851244498Z" level=error msg="Failed to destroy network for sandbox \"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.857285 systemd[1]: run-netns-cni\x2db43d07cc\x2df189\x2da5d2\x2dd983\x2dcee4b1ecc8c5.mount: Deactivated successfully. 
Jan 14 01:32:33.861842 containerd[1601]: time="2026-01-14T01:32:33.861241459Z" level=error msg="Failed to destroy network for sandbox \"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.866005 containerd[1601]: time="2026-01-14T01:32:33.865230707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546579f487-48d5w,Uid:35648de2-563a-403b-bdd1-f0409de12a27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.867756 kubelet[2869]: E0114 01:32:33.867012 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.868209 kubelet[2869]: E0114 01:32:33.867862 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546579f487-48d5w" Jan 14 01:32:33.868606 kubelet[2869]: E0114 01:32:33.868513 
2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546579f487-48d5w" Jan 14 01:32:33.871412 kubelet[2869]: E0114 01:32:33.871268 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"394210fc2d6316c8e8460e380530a0a272d935e95be55ce1dc0f2cb4fa3e81b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:32:33.875225 containerd[1601]: time="2026-01-14T01:32:33.875172594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659b5bb58d-6tgvx,Uid:b77f400e-9c3a-49b0-abc0-b3cf084634d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.878869 kubelet[2869]: E0114 01:32:33.876311 2869 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.878869 kubelet[2869]: E0114 01:32:33.876390 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-659b5bb58d-6tgvx" Jan 14 01:32:33.878869 kubelet[2869]: E0114 01:32:33.876424 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-659b5bb58d-6tgvx" Jan 14 01:32:33.879296 containerd[1601]: time="2026-01-14T01:32:33.878376061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcn4k,Uid:277e32d3-813e-4a52-82ac-39307655fe89,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.879453 kubelet[2869]: E0114 01:32:33.876495 2869 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-659b5bb58d-6tgvx_calico-system(b77f400e-9c3a-49b0-abc0-b3cf084634d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-659b5bb58d-6tgvx_calico-system(b77f400e-9c3a-49b0-abc0-b3cf084634d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"337d34436a695d424991a354df3cc13d6e574722cc8b6bc80d60698ec9261393\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-659b5bb58d-6tgvx" podUID="b77f400e-9c3a-49b0-abc0-b3cf084634d3" Jan 14 01:32:33.883051 kubelet[2869]: E0114 01:32:33.882654 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.883051 kubelet[2869]: E0114 01:32:33.882782 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcn4k" Jan 14 01:32:33.883051 kubelet[2869]: E0114 01:32:33.882811 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcn4k" Jan 14 01:32:33.884517 kubelet[2869]: E0114 01:32:33.884456 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dcn4k_kube-system(277e32d3-813e-4a52-82ac-39307655fe89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dcn4k_kube-system(277e32d3-813e-4a52-82ac-39307655fe89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a2bcc9f7afce6263d4fbc1388e20f6edc2e96201302a2a2c99d8111e8f40251\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dcn4k" podUID="277e32d3-813e-4a52-82ac-39307655fe89" Jan 14 01:32:33.885177 containerd[1601]: time="2026-01-14T01:32:33.885066890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-wkc27,Uid:c32ecf43-33bb-4f07-8af2-75af73cd7967,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.885857 kubelet[2869]: E0114 01:32:33.885620 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.885857 kubelet[2869]: E0114 
01:32:33.885662 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" Jan 14 01:32:33.885857 kubelet[2869]: E0114 01:32:33.885683 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" Jan 14 01:32:33.886296 kubelet[2869]: E0114 01:32:33.885791 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"323c2c75cdd5e3847e15f34cf1b212723c1db6d3b2600a4b2c15cfe8679da47c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:32:33.925068 containerd[1601]: time="2026-01-14T01:32:33.924581985Z" level=error msg="Failed to destroy network for sandbox 
\"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.928786 containerd[1601]: time="2026-01-14T01:32:33.927357230Z" level=error msg="Failed to destroy network for sandbox \"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.930718 containerd[1601]: time="2026-01-14T01:32:33.929831242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68nvj,Uid:898e39c7-945e-4928-a3eb-790aff1d14eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.933741 kubelet[2869]: E0114 01:32:33.933598 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.935823 kubelet[2869]: E0114 01:32:33.935039 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-68nvj" Jan 14 01:32:33.936173 kubelet[2869]: E0114 01:32:33.935833 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-68nvj" Jan 14 01:32:33.936627 containerd[1601]: time="2026-01-14T01:32:33.935810849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5vwfg,Uid:1821a0db-e895-49f0-8081-ae8dd6cf61e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.937222 kubelet[2869]: E0114 01:32:33.937080 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.937353 kubelet[2869]: E0114 01:32:33.937203 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:33.937462 kubelet[2869]: E0114 01:32:33.937296 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5vwfg" Jan 14 01:32:33.937506 kubelet[2869]: E0114 01:32:33.937465 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef494d8cb6c265933c9dc84cb4ad7318bae72520f9bf3568c9a25c22911259c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:32:33.938077 kubelet[2869]: E0114 01:32:33.937879 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-68nvj_kube-system(898e39c7-945e-4928-a3eb-790aff1d14eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-68nvj_kube-system(898e39c7-945e-4928-a3eb-790aff1d14eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ff2f1133708508af016207bb0edfd05bbd907b5493b6b94d7a9a5c686e422daf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-68nvj" podUID="898e39c7-945e-4928-a3eb-790aff1d14eb" Jan 14 01:32:33.977707 containerd[1601]: time="2026-01-14T01:32:33.977530025Z" level=error msg="Failed to destroy network for sandbox \"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.982049 containerd[1601]: time="2026-01-14T01:32:33.981794564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-q67mc,Uid:ff2a83bd-ca30-4810-bc00-617909aaca25,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.982482 kubelet[2869]: E0114 01:32:33.982315 2869 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:32:33.982482 kubelet[2869]: E0114 01:32:33.982392 2869 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" Jan 14 01:32:33.982482 kubelet[2869]: E0114 01:32:33.982423 2869 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" Jan 14 01:32:33.984746 kubelet[2869]: E0114 01:32:33.984578 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bc15d3e55d38a8e6f5cf97ebc565c997d3f1ed9a010a8e8500041ffe0a99fa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:32:34.629614 systemd[1]: run-netns-cni\x2d86a5f1db\x2d152e\x2d454a\x2dc68e\x2d1277812063b6.mount: Deactivated successfully. Jan 14 01:32:34.630263 systemd[1]: run-netns-cni\x2dda547942\x2d5910\x2d8914\x2df963\x2dcafba3bc938a.mount: Deactivated successfully. 
Jan 14 01:32:34.633306 systemd[1]: run-netns-cni\x2d7703ccd9\x2df0ea\x2d3e5b\x2d87c6\x2d41fb37a69ed4.mount: Deactivated successfully. Jan 14 01:32:34.633636 systemd[1]: run-netns-cni\x2d61286372\x2d4461\x2da3af\x2d8ac6\x2d67debee0fe60.mount: Deactivated successfully. Jan 14 01:32:41.758319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282899119.mount: Deactivated successfully. Jan 14 01:32:41.805261 containerd[1601]: time="2026-01-14T01:32:41.804879985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:41.806615 containerd[1601]: time="2026-01-14T01:32:41.806544562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883537" Jan 14 01:32:41.809522 containerd[1601]: time="2026-01-14T01:32:41.809428088Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:41.813729 containerd[1601]: time="2026-01-14T01:32:41.813478421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:32:41.813857 containerd[1601]: time="2026-01-14T01:32:41.813823574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 23.520702933s" Jan 14 01:32:41.814026 containerd[1601]: time="2026-01-14T01:32:41.813858259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:32:41.878618 containerd[1601]: time="2026-01-14T01:32:41.878540344Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:32:41.907845 containerd[1601]: time="2026-01-14T01:32:41.907724821Z" level=info msg="Container d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:32:41.930948 containerd[1601]: time="2026-01-14T01:32:41.930790497Z" level=info msg="CreateContainer within sandbox \"be12c9fab17abf1001454fc9a83e4f32448708ce57ffcbbcc235c5d97ec7841e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410\"" Jan 14 01:32:41.934014 containerd[1601]: time="2026-01-14T01:32:41.932085917Z" level=info msg="StartContainer for \"d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410\"" Jan 14 01:32:41.935839 containerd[1601]: time="2026-01-14T01:32:41.935486268Z" level=info msg="connecting to shim d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410" address="unix:///run/containerd/s/90901150b335529d026808932b08feeab60441059faa030dca6b8ba96c724879" protocol=ttrpc version=3 Jan 14 01:32:42.054428 systemd[1]: Started cri-containerd-d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410.scope - libcontainer container d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410. 
Jan 14 01:32:42.149000 audit: BPF prog-id=170 op=LOAD Jan 14 01:32:42.157084 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:32:42.157816 kernel: audit: type=1334 audit(1768354362.149:562): prog-id=170 op=LOAD Jan 14 01:32:42.149000 audit[3944]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.176878 kernel: audit: type=1300 audit(1768354362.149:562): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: BPF prog-id=171 op=LOAD Jan 14 01:32:42.198667 kernel: audit: type=1327 audit(1768354362.149:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.198737 kernel: audit: type=1334 audit(1768354362.149:563): prog-id=171 op=LOAD Jan 14 01:32:42.198765 kernel: audit: type=1300 audit(1768354362.149:563): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.149000 audit[3944]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.216439 kernel: audit: type=1327 audit(1768354362.149:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:32:42.149000 audit[3944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.246415 containerd[1601]: time="2026-01-14T01:32:42.246387084Z" level=info msg="StartContainer for \"d89a5b230bbeb199dfa39d120178430ea773bffcfdf1aab04619fe80429f4410\" returns successfully" Jan 14 01:32:42.256745 kernel: audit: type=1334 audit(1768354362.149:564): prog-id=171 op=UNLOAD Jan 14 01:32:42.256835 kernel: audit: type=1300 audit(1768354362.149:564): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.256860 kernel: audit: type=1327 audit(1768354362.149:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:32:42.288031 kernel: audit: type=1334 audit(1768354362.149:565): prog-id=170 op=UNLOAD Jan 14 01:32:42.149000 audit[3944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.149000 audit: BPF prog-id=172 op=LOAD Jan 14 01:32:42.149000 audit[3944]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3425 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:42.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438396135623233306262656231393964666133396431323031373834 Jan 14 01:32:42.559855 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:32:42.564574 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:32:42.764518 kubelet[2869]: E0114 01:32:42.763355 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:42.917634 kubelet[2869]: I0114 01:32:42.916625 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pgr6m" podStartSLOduration=2.614776065 podStartE2EDuration="51.916441235s" podCreationTimestamp="2026-01-14 01:31:51 +0000 UTC" firstStartedPulling="2026-01-14 01:31:52.526668499 +0000 UTC m=+37.175940080" lastFinishedPulling="2026-01-14 01:32:41.82833367 +0000 UTC m=+86.477605250" observedRunningTime="2026-01-14 01:32:42.866139628 +0000 UTC m=+87.515411219" watchObservedRunningTime="2026-01-14 01:32:42.916441235 +0000 UTC m=+87.565712815" Jan 14 01:32:42.994018 kubelet[2869]: I0114 01:32:42.993593 2869 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c72m2\" (UniqueName: \"kubernetes.io/projected/b77f400e-9c3a-49b0-abc0-b3cf084634d3-kube-api-access-c72m2\") pod \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\" (UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " Jan 14 01:32:42.994018 kubelet[2869]: I0114 01:32:42.993660 2869 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-ca-bundle\") pod \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\" 
(UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " Jan 14 01:32:42.994018 kubelet[2869]: I0114 01:32:42.993703 2869 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-backend-key-pair\") pod \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\" (UID: \"b77f400e-9c3a-49b0-abc0-b3cf084634d3\") " Jan 14 01:32:42.999052 kubelet[2869]: I0114 01:32:42.997746 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b77f400e-9c3a-49b0-abc0-b3cf084634d3" (UID: "b77f400e-9c3a-49b0-abc0-b3cf084634d3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:32:43.018090 systemd[1]: var-lib-kubelet-pods-b77f400e\x2d9c3a\x2d49b0\x2dabc0\x2db3cf084634d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc72m2.mount: Deactivated successfully. Jan 14 01:32:43.018294 systemd[1]: var-lib-kubelet-pods-b77f400e\x2d9c3a\x2d49b0\x2dabc0\x2db3cf084634d3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:32:43.035498 kubelet[2869]: I0114 01:32:43.035390 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77f400e-9c3a-49b0-abc0-b3cf084634d3-kube-api-access-c72m2" (OuterVolumeSpecName: "kube-api-access-c72m2") pod "b77f400e-9c3a-49b0-abc0-b3cf084634d3" (UID: "b77f400e-9c3a-49b0-abc0-b3cf084634d3"). InnerVolumeSpecName "kube-api-access-c72m2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:32:43.036680 kubelet[2869]: I0114 01:32:43.035582 2869 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b77f400e-9c3a-49b0-abc0-b3cf084634d3" (UID: "b77f400e-9c3a-49b0-abc0-b3cf084634d3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:32:43.095064 kubelet[2869]: I0114 01:32:43.094584 2869 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c72m2\" (UniqueName: \"kubernetes.io/projected/b77f400e-9c3a-49b0-abc0-b3cf084634d3-kube-api-access-c72m2\") on node \"localhost\" DevicePath \"\"" Jan 14 01:32:43.095064 kubelet[2869]: I0114 01:32:43.094700 2869 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 01:32:43.095064 kubelet[2869]: I0114 01:32:43.094719 2869 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b77f400e-9c3a-49b0-abc0-b3cf084634d3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 01:32:43.097056 kubelet[2869]: E0114 01:32:43.096777 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:43.161810 systemd[1]: Removed slice kubepods-besteffort-podb77f400e_9c3a_49b0_abc0_b3cf084634d3.slice - libcontainer container kubepods-besteffort-podb77f400e_9c3a_49b0_abc0_b3cf084634d3.slice. 
Jan 14 01:32:43.763803 kubelet[2869]: E0114 01:32:43.763636 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:43.951771 systemd[1]: Created slice kubepods-besteffort-pod1f7ed930_9020_4e7b_a11b_c469857f7fe1.slice - libcontainer container kubepods-besteffort-pod1f7ed930_9020_4e7b_a11b_c469857f7fe1.slice. Jan 14 01:32:44.024446 kubelet[2869]: I0114 01:32:44.023436 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f7ed930-9020-4e7b-a11b-c469857f7fe1-whisker-ca-bundle\") pod \"whisker-b769697d-jcx4g\" (UID: \"1f7ed930-9020-4e7b-a11b-c469857f7fe1\") " pod="calico-system/whisker-b769697d-jcx4g" Jan 14 01:32:44.024446 kubelet[2869]: I0114 01:32:44.023570 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f7ed930-9020-4e7b-a11b-c469857f7fe1-whisker-backend-key-pair\") pod \"whisker-b769697d-jcx4g\" (UID: \"1f7ed930-9020-4e7b-a11b-c469857f7fe1\") " pod="calico-system/whisker-b769697d-jcx4g" Jan 14 01:32:44.024446 kubelet[2869]: I0114 01:32:44.023615 2869 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cx6z\" (UniqueName: \"kubernetes.io/projected/1f7ed930-9020-4e7b-a11b-c469857f7fe1-kube-api-access-9cx6z\") pod \"whisker-b769697d-jcx4g\" (UID: \"1f7ed930-9020-4e7b-a11b-c469857f7fe1\") " pod="calico-system/whisker-b769697d-jcx4g" Jan 14 01:32:44.257598 containerd[1601]: time="2026-01-14T01:32:44.257481724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b769697d-jcx4g,Uid:1f7ed930-9020-4e7b-a11b-c469857f7fe1,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:44.887261 systemd-networkd[1498]: cali5faeade9679: Link UP Jan 14 
01:32:44.891129 systemd-networkd[1498]: cali5faeade9679: Gained carrier Jan 14 01:32:44.952625 containerd[1601]: 2026-01-14 01:32:44.315 [INFO][4064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:32:44.952625 containerd[1601]: 2026-01-14 01:32:44.357 [INFO][4064] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--b769697d--jcx4g-eth0 whisker-b769697d- calico-system 1f7ed930-9020-4e7b-a11b-c469857f7fe1 1026 0 2026-01-14 01:32:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b769697d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b769697d-jcx4g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5faeade9679 [] [] }} ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-" Jan 14 01:32:44.952625 containerd[1601]: 2026-01-14 01:32:44.357 [INFO][4064] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.952625 containerd[1601]: 2026-01-14 01:32:44.578 [INFO][4078] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" HandleID="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Workload="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.581 [INFO][4078] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" 
HandleID="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Workload="localhost-k8s-whisker--b769697d--jcx4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d52c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b769697d-jcx4g", "timestamp":"2026-01-14 01:32:44.578087756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.581 [INFO][4078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.583 [INFO][4078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.584 [INFO][4078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.637 [INFO][4078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" host="localhost" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.683 [INFO][4078] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.732 [INFO][4078] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.741 [INFO][4078] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.760 [INFO][4078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:44.953053 containerd[1601]: 2026-01-14 01:32:44.763 
[INFO][4078] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" host="localhost" Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.775 [INFO][4078] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33 Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.797 [INFO][4078] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" host="localhost" Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.821 [INFO][4078] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" host="localhost" Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.822 [INFO][4078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" host="localhost" Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.823 [INFO][4078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:32:44.953786 containerd[1601]: 2026-01-14 01:32:44.823 [INFO][4078] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" HandleID="k8s-pod-network.16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Workload="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.954098 containerd[1601]: 2026-01-14 01:32:44.829 [INFO][4064] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b769697d--jcx4g-eth0", GenerateName:"whisker-b769697d-", Namespace:"calico-system", SelfLink:"", UID:"1f7ed930-9020-4e7b-a11b-c469857f7fe1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b769697d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b769697d-jcx4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5faeade9679", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:44.954098 containerd[1601]: 2026-01-14 01:32:44.829 [INFO][4064] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.954363 containerd[1601]: 2026-01-14 01:32:44.829 [INFO][4064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5faeade9679 ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.954363 containerd[1601]: 2026-01-14 01:32:44.881 [INFO][4064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:44.954499 containerd[1601]: 2026-01-14 01:32:44.892 [INFO][4064] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b769697d--jcx4g-eth0", GenerateName:"whisker-b769697d-", Namespace:"calico-system", SelfLink:"", UID:"1f7ed930-9020-4e7b-a11b-c469857f7fe1", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b769697d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33", Pod:"whisker-b769697d-jcx4g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5faeade9679", MAC:"3e:b5:c5:12:2c:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:44.954674 containerd[1601]: 2026-01-14 01:32:44.942 [INFO][4064] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" Namespace="calico-system" Pod="whisker-b769697d-jcx4g" WorkloadEndpoint="localhost-k8s-whisker--b769697d--jcx4g-eth0" Jan 14 01:32:45.104455 containerd[1601]: time="2026-01-14T01:32:45.103400757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-wkc27,Uid:c32ecf43-33bb-4f07-8af2-75af73cd7967,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:32:45.122035 containerd[1601]: time="2026-01-14T01:32:45.121824493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-q67mc,Uid:ff2a83bd-ca30-4810-bc00-617909aaca25,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:32:45.128851 kubelet[2869]: I0114 01:32:45.128807 2869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77f400e-9c3a-49b0-abc0-b3cf084634d3" 
path="/var/lib/kubelet/pods/b77f400e-9c3a-49b0-abc0-b3cf084634d3/volumes" Jan 14 01:32:45.391467 containerd[1601]: time="2026-01-14T01:32:45.391114868Z" level=info msg="connecting to shim 16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33" address="unix:///run/containerd/s/c9e84c058f0ff9cab42fbb90afe98f6cb56dd4c287d59b42338ca8f20629fbe7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:45.401000 audit: BPF prog-id=173 op=LOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc246571f0 a2=98 a3=1fffffffffffffff items=0 ppid=4121 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.401000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc246571c0 a3=0 items=0 ppid=4121 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.401000 audit: BPF prog-id=174 op=LOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc246570d0 a2=94 a3=3 items=0 ppid=4121 pid=4273 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.401000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc246570d0 a2=94 a3=3 items=0 ppid=4121 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.401000 audit: BPF prog-id=175 op=LOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc24657110 a2=94 a3=7ffc246572f0 items=0 ppid=4121 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.401000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:32:45.401000 audit[4273]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=7ffc24657110 a2=94 a3=7ffc246572f0 items=0 ppid=4121 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.401000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:32:45.414000 audit: BPF prog-id=176 op=LOAD Jan 14 01:32:45.414000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde9720730 a2=98 a3=3 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.414000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:32:45.414000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde9720700 a3=0 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.415000 audit: BPF prog-id=177 op=LOAD Jan 14 01:32:45.415000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde9720520 a2=94 a3=54428f items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.415000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.415000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:32:45.415000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde9720520 a2=94 a3=54428f items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.415000 audit: BPF prog-id=178 op=LOAD Jan 14 01:32:45.415000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde9720550 a2=94 a3=2 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.415000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:32:45.415000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde9720550 a2=0 a3=2 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.415000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.496729 systemd[1]: Started cri-containerd-16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33.scope - libcontainer container 16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33. 
Jan 14 01:32:45.546000 audit: BPF prog-id=179 op=LOAD Jan 14 01:32:45.548000 audit: BPF prog-id=180 op=LOAD Jan 14 01:32:45.548000 audit[4277]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.551000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:32:45.551000 audit[4277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.553000 audit: BPF prog-id=181 op=LOAD Jan 14 01:32:45.553000 audit[4277]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.553000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.555000 audit: BPF prog-id=182 op=LOAD Jan 14 01:32:45.555000 audit[4277]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.556000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:32:45.556000 audit[4277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.556000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:32:45.556000 audit[4277]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:32:45.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.557000 audit: BPF prog-id=183 op=LOAD Jan 14 01:32:45.557000 audit[4277]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4259 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136663735626666303334393562333430396561393636383266326535 Jan 14 01:32:45.568561 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:45.692571 containerd[1601]: time="2026-01-14T01:32:45.691777799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b769697d-jcx4g,Uid:1f7ed930-9020-4e7b-a11b-c469857f7fe1,Namespace:calico-system,Attempt:0,} returns sandbox id \"16f75bff03495b3409ea96682f2e511643a5bf8cebd523f71d07a8917dd24f33\"" Jan 14 01:32:45.703308 containerd[1601]: time="2026-01-14T01:32:45.701232159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:32:45.713273 systemd-networkd[1498]: calif247713f823: Link UP Jan 14 01:32:45.713633 systemd-networkd[1498]: calif247713f823: Gained carrier Jan 14 01:32:45.777114 containerd[1601]: 2026-01-14 01:32:45.423 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0 
calico-apiserver-77c46b477- calico-apiserver ff2a83bd-ca30-4810-bc00-617909aaca25 956 0 2026-01-14 01:31:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c46b477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c46b477-q67mc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif247713f823 [] [] }} ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-" Jan 14 01:32:45.777114 containerd[1601]: 2026-01-14 01:32:45.424 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.777114 containerd[1601]: 2026-01-14 01:32:45.543 [INFO][4288] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" HandleID="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Workload="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.543 [INFO][4288] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" HandleID="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Workload="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000585490), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-77c46b477-q67mc", "timestamp":"2026-01-14 01:32:45.543084929 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.544 [INFO][4288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.544 [INFO][4288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.544 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.565 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" host="localhost" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.580 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.606 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.614 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.631 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:45.777592 containerd[1601]: 2026-01-14 01:32:45.631 [INFO][4288] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" host="localhost" Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.636 
[INFO][4288] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.655 [INFO][4288] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" host="localhost" Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.671 [INFO][4288] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" host="localhost" Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.671 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" host="localhost" Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.671 [INFO][4288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:32:45.780402 containerd[1601]: 2026-01-14 01:32:45.671 [INFO][4288] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" HandleID="k8s-pod-network.a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Workload="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.780597 containerd[1601]: 2026-01-14 01:32:45.677 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0", GenerateName:"calico-apiserver-77c46b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff2a83bd-ca30-4810-bc00-617909aaca25", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c46b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c46b477-q67mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif247713f823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:45.780780 containerd[1601]: 2026-01-14 01:32:45.677 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.780780 containerd[1601]: 2026-01-14 01:32:45.677 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif247713f823 ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.780780 containerd[1601]: 2026-01-14 01:32:45.717 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.781012 containerd[1601]: 2026-01-14 01:32:45.720 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0", GenerateName:"calico-apiserver-77c46b477-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"ff2a83bd-ca30-4810-bc00-617909aaca25", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c46b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d", Pod:"calico-apiserver-77c46b477-q67mc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif247713f823", MAC:"ce:74:5c:8d:0c:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:45.781257 containerd[1601]: 2026-01-14 01:32:45.769 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-q67mc" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--q67mc-eth0" Jan 14 01:32:45.810882 containerd[1601]: time="2026-01-14T01:32:45.810548765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:45.819881 containerd[1601]: time="2026-01-14T01:32:45.819834693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:32:45.820820 containerd[1601]: time="2026-01-14T01:32:45.820791277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:45.821699 kubelet[2869]: E0114 01:32:45.821325 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:32:45.821699 kubelet[2869]: E0114 01:32:45.821608 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:32:45.822489 kubelet[2869]: E0114 01:32:45.822373 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d51c992c994bfdbf53b4556ecb9a0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:45.824000 audit: BPF prog-id=184 op=LOAD Jan 14 01:32:45.824000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde9720410 a2=94 a3=1 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.824000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.826000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:32:45.826000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde9720410 a2=94 a3=1 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.826000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.831441 containerd[1601]: time="2026-01-14T01:32:45.831376550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:32:45.846000 audit: BPF prog-id=185 op=LOAD Jan 14 01:32:45.846000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde9720400 a2=94 a3=4 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.846000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.847000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:32:45.847000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde9720400 a2=0 a3=4 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.847000 audit: BPF prog-id=186 op=LOAD Jan 14 01:32:45.847000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 
a0=5 a1=7ffde9720260 a2=94 a3=5 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.847000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:32:45.847000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde9720260 a2=0 a3=5 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.847000 audit: BPF prog-id=187 op=LOAD Jan 14 01:32:45.847000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde9720480 a2=94 a3=6 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.847000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:32:45.847000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde9720480 a2=0 a3=6 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.847000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.848000 audit: BPF prog-id=188 op=LOAD Jan 14 01:32:45.848000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde971fc30 a2=94 a3=88 items=0 ppid=4121 pid=4274 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.848000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.849000 audit: BPF prog-id=189 op=LOAD Jan 14 01:32:45.849000 audit[4274]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde971fab0 a2=94 a3=2 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.849000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.849000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:32:45.849000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde971fae0 a2=0 a3=7ffde971fbe0 items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.849000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.850000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:32:45.850000 audit[4274]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1ed6ed10 a2=0 a3=cb5fb1116e4b51bb items=0 ppid=4121 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:32:45.893000 audit: BPF prog-id=190 op=LOAD Jan 14 01:32:45.893000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfb810160 a2=98 a3=1999999999999999 items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.893000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.894000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:32:45.894000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcfb810130 a3=0 items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.894000 audit: BPF prog-id=191 op=LOAD Jan 14 01:32:45.894000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfb810040 a2=94 a3=ffff items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.894000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:32:45.894000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfb810040 a2=94 a3=ffff 
items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.894000 audit: BPF prog-id=192 op=LOAD Jan 14 01:32:45.894000 audit[4338]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfb810080 a2=94 a3=7ffcfb810260 items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.894000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:32:45.894000 audit[4338]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcfb810080 a2=94 a3=7ffcfb810260 items=0 ppid=4121 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:45.894000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:32:45.907753 systemd-networkd[1498]: caliba08acaff58: Link UP Jan 14 
01:32:45.911125 containerd[1601]: time="2026-01-14T01:32:45.910558985Z" level=info msg="connecting to shim a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d" address="unix:///run/containerd/s/2cc0ba54517ded90d374d2ef97279a4f2375b241e55d057e6208df77e98aafe5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:45.918154 containerd[1601]: time="2026-01-14T01:32:45.918130133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:45.923099 systemd-networkd[1498]: caliba08acaff58: Gained carrier Jan 14 01:32:45.925317 containerd[1601]: time="2026-01-14T01:32:45.925284153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:32:45.925815 containerd[1601]: time="2026-01-14T01:32:45.925400621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:45.929843 kubelet[2869]: E0114 01:32:45.926151 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:32:45.929843 kubelet[2869]: E0114 01:32:45.926300 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:32:45.930601 kubelet[2869]: E0114 01:32:45.926475 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:45.932361 kubelet[2869]: E0114 01:32:45.931134 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:32:45.981474 containerd[1601]: 2026-01-14 01:32:45.396 [INFO][4222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0 calico-apiserver-77c46b477- calico-apiserver c32ecf43-33bb-4f07-8af2-75af73cd7967 952 0 2026-01-14 01:31:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c46b477 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c46b477-wkc27 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba08acaff58 [] [] }} ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-" Jan 14 01:32:45.981474 containerd[1601]: 2026-01-14 01:32:45.398 [INFO][4222] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.981474 containerd[1601]: 2026-01-14 01:32:45.560 [INFO][4279] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" HandleID="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Workload="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.561 [INFO][4279] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" HandleID="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Workload="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c46b477-wkc27", "timestamp":"2026-01-14 01:32:45.559801551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.562 [INFO][4279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.671 [INFO][4279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.672 [INFO][4279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.692 [INFO][4279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" host="localhost" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.733 [INFO][4279] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.781 [INFO][4279] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.794 [INFO][4279] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.807 [INFO][4279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:45.981742 containerd[1601]: 2026-01-14 01:32:45.807 [INFO][4279] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" host="localhost" Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.816 [INFO][4279] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64 Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.845 [INFO][4279] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" host="localhost" Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.888 [INFO][4279] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" host="localhost" Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.888 [INFO][4279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" host="localhost" Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.888 [INFO][4279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:45.982511 containerd[1601]: 2026-01-14 01:32:45.888 [INFO][4279] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" HandleID="k8s-pod-network.fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Workload="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.982816 containerd[1601]: 2026-01-14 01:32:45.898 [INFO][4222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0", GenerateName:"calico-apiserver-77c46b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c32ecf43-33bb-4f07-8af2-75af73cd7967", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c46b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c46b477-wkc27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba08acaff58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:45.987534 containerd[1601]: 2026-01-14 01:32:45.898 [INFO][4222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.987534 containerd[1601]: 2026-01-14 01:32:45.898 [INFO][4222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba08acaff58 ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.987534 containerd[1601]: 2026-01-14 01:32:45.924 [INFO][4222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:45.987666 containerd[1601]: 2026-01-14 01:32:45.938 [INFO][4222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0", GenerateName:"calico-apiserver-77c46b477-", Namespace:"calico-apiserver", SelfLink:"", UID:"c32ecf43-33bb-4f07-8af2-75af73cd7967", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c46b477", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64", Pod:"calico-apiserver-77c46b477-wkc27", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba08acaff58", MAC:"52:3d:90:10:b9:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:45.987838 containerd[1601]: 2026-01-14 01:32:45.969 [INFO][4222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" Namespace="calico-apiserver" Pod="calico-apiserver-77c46b477-wkc27" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c46b477--wkc27-eth0" Jan 14 01:32:46.000521 systemd[1]: Started cri-containerd-a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d.scope - libcontainer container a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d. Jan 14 01:32:46.081680 containerd[1601]: time="2026-01-14T01:32:46.081620307Z" level=info msg="connecting to shim fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64" address="unix:///run/containerd/s/76143a7e3bf727a0da9e1942888cc70ffbfbe28a20fa54d466f327a088eea0f4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:46.100774 containerd[1601]: time="2026-01-14T01:32:46.099157265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5vwfg,Uid:1821a0db-e895-49f0-8081-ae8dd6cf61e7,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:46.123059 containerd[1601]: time="2026-01-14T01:32:46.121107520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546579f487-48d5w,Uid:35648de2-563a-403b-bdd1-f0409de12a27,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:46.123059 containerd[1601]: time="2026-01-14T01:32:46.126073600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jt56,Uid:a92d2670-8bc7-4318-8d73-b12be2d0a45e,Namespace:calico-system,Attempt:0,}" Jan 14 01:32:46.170000 audit: BPF prog-id=193 op=LOAD Jan 14 01:32:46.169449 systemd-networkd[1498]: vxlan.calico: Link UP Jan 14 01:32:46.169463 systemd-networkd[1498]: vxlan.calico: Gained carrier Jan 14 01:32:46.171000 audit: BPF prog-id=194 op=LOAD Jan 14 01:32:46.171000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.171000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:32:46.171000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.173000 audit: BPF prog-id=195 op=LOAD Jan 14 01:32:46.173000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.173000 audit: BPF prog-id=196 op=LOAD Jan 14 01:32:46.173000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.173000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:32:46.173000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.173000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:32:46.173000 audit[4360]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.173000 audit: BPF prog-id=197 op=LOAD Jan 14 01:32:46.173000 audit[4360]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4340 pid=4360 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.173000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134356661366461333330633739653664306337643735646162653138 Jan 14 01:32:46.179024 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:46.284413 systemd-networkd[1498]: cali5faeade9679: Gained IPv6LL Jan 14 01:32:46.313351 systemd[1]: Started cri-containerd-fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64.scope - libcontainer container fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64. Jan 14 01:32:46.320000 audit: BPF prog-id=198 op=LOAD Jan 14 01:32:46.320000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb54143b0 a2=98 a3=0 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.320000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.321000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:32:46.321000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdb5414380 a3=0 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.321000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.321000 audit: BPF prog-id=199 op=LOAD Jan 14 01:32:46.321000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb54141c0 a2=94 a3=54428f items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.321000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:32:46.321000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb54141c0 a2=94 a3=54428f items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.321000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.321000 audit: BPF prog-id=200 op=LOAD Jan 14 01:32:46.321000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb54141f0 a2=94 a3=2 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.321000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.322000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:32:46.322000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdb54141f0 a2=0 a3=2 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.322000 audit: BPF prog-id=201 op=LOAD Jan 14 01:32:46.322000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb5413fa0 a2=94 a3=4 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.322000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:32:46.322000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb5413fa0 a2=94 a3=4 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.322000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.322000 audit: BPF prog-id=202 op=LOAD Jan 14 01:32:46.322000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb54140a0 a2=94 a3=7ffdb5414220 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.322000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:32:46.322000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb54140a0 a2=0 a3=7ffdb5414220 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.322000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.326000 audit: BPF prog-id=203 op=LOAD Jan 14 01:32:46.326000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb54137d0 a2=94 a3=2 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.326000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.327000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:32:46.327000 audit[4475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdb54137d0 a2=0 a3=2 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.327000 audit: BPF prog-id=204 op=LOAD Jan 14 01:32:46.327000 audit[4475]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdb54138d0 a2=94 a3=30 items=0 ppid=4121 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.327000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:32:46.371000 audit: BPF prog-id=205 op=LOAD Jan 14 01:32:46.371000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe0d75e280 a2=98 a3=0 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.371000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.371000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:32:46.371000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe0d75e250 a3=0 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.371000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.372000 audit: BPF prog-id=206 op=LOAD Jan 14 01:32:46.372000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe0d75e070 a2=94 a3=54428f items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.372000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.372000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:32:46.372000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0d75e070 a2=94 a3=54428f items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.372000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.372000 audit: BPF prog-id=207 op=LOAD Jan 14 01:32:46.372000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe0d75e0a0 a2=94 a3=2 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.372000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.372000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:32:46.372000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0d75e0a0 a2=0 a3=2 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.372000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.437000 audit: BPF prog-id=208 op=LOAD Jan 14 01:32:46.439000 audit: BPF prog-id=209 op=LOAD Jan 14 01:32:46.439000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.439000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.451000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:32:46.451000 audit[4406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.455000 audit: BPF prog-id=210 op=LOAD Jan 14 01:32:46.455000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.456000 audit: BPF prog-id=211 op=LOAD Jan 14 01:32:46.456000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:32:46.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.458000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:32:46.458000 audit[4406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.458000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:32:46.458000 audit[4406]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.458000 audit: BPF prog-id=212 op=LOAD Jan 14 01:32:46.458000 audit[4406]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4396 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664336539666262613638663663323734626365383966343733313530 Jan 14 01:32:46.465030 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:46.482555 containerd[1601]: time="2026-01-14T01:32:46.482419875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-q67mc,Uid:ff2a83bd-ca30-4810-bc00-617909aaca25,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a45fa6da330c79e6d0c7d75dabe18e8925fa36258118238b02fae95a732b713d\"" Jan 14 01:32:46.490489 containerd[1601]: time="2026-01-14T01:32:46.490123349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:32:46.566152 containerd[1601]: time="2026-01-14T01:32:46.565637855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:46.582607 containerd[1601]: time="2026-01-14T01:32:46.579873851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:32:46.583269 containerd[1601]: time="2026-01-14T01:32:46.580285740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:46.586114 kubelet[2869]: E0114 01:32:46.585329 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:46.586114 kubelet[2869]: E0114 01:32:46.585381 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:46.586114 kubelet[2869]: E0114 01:32:46.585505 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47q8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:46.588011 kubelet[2869]: E0114 01:32:46.587708 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:32:46.648294 containerd[1601]: time="2026-01-14T01:32:46.647814676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c46b477-wkc27,Uid:c32ecf43-33bb-4f07-8af2-75af73cd7967,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"fd3e9fbba68f6c274bce89f473150091ab266f83dad7f05bd8b8a84879e31b64\"" Jan 14 01:32:46.656259 containerd[1601]: time="2026-01-14T01:32:46.656159257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:32:46.713504 systemd-networkd[1498]: cali5dfcc949ddf: Link UP Jan 14 01:32:46.719044 systemd-networkd[1498]: cali5dfcc949ddf: Gained carrier Jan 14 01:32:46.734608 containerd[1601]: time="2026-01-14T01:32:46.734460769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:46.740368 containerd[1601]: time="2026-01-14T01:32:46.740318383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:32:46.740457 containerd[1601]: time="2026-01-14T01:32:46.740426283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:46.742452 kubelet[2869]: E0114 01:32:46.741609 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:46.742452 kubelet[2869]: E0114 01:32:46.741673 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:46.742452 kubelet[2869]: E0114 01:32:46.741832 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:46.743334 kubelet[2869]: E0114 01:32:46.743165 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:32:46.768856 containerd[1601]: 2026-01-14 01:32:46.400 [INFO][4418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--5vwfg-eth0 goldmane-666569f655- calico-system 1821a0db-e895-49f0-8081-ae8dd6cf61e7 955 0 
2026-01-14 01:31:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-5vwfg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5dfcc949ddf [] [] }} ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-" Jan 14 01:32:46.768856 containerd[1601]: 2026-01-14 01:32:46.401 [INFO][4418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.768856 containerd[1601]: 2026-01-14 01:32:46.523 [INFO][4499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" HandleID="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Workload="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.526 [INFO][4499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" HandleID="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Workload="localhost-k8s-goldmane--666569f655--5vwfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-5vwfg", "timestamp":"2026-01-14 01:32:46.523565817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.526 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.526 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.526 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.556 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" host="localhost" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.580 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.603 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.623 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.629 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:46.769419 containerd[1601]: 2026-01-14 01:32:46.629 [INFO][4499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" host="localhost" Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.636 [INFO][4499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451 Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.657 [INFO][4499] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" host="localhost" Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.693 [INFO][4499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" host="localhost" Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.695 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" host="localhost" Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.696 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:46.770112 containerd[1601]: 2026-01-14 01:32:46.697 [INFO][4499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" HandleID="k8s-pod-network.eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Workload="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.770443 containerd[1601]: 2026-01-14 01:32:46.706 [INFO][4418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5vwfg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1821a0db-e895-49f0-8081-ae8dd6cf61e7", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 48, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-5vwfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5dfcc949ddf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:46.770443 containerd[1601]: 2026-01-14 01:32:46.706 [INFO][4418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.770706 containerd[1601]: 2026-01-14 01:32:46.706 [INFO][4418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dfcc949ddf ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.770706 containerd[1601]: 2026-01-14 01:32:46.721 [INFO][4418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.770786 containerd[1601]: 2026-01-14 01:32:46.723 [INFO][4418] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--5vwfg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1821a0db-e895-49f0-8081-ae8dd6cf61e7", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451", Pod:"goldmane-666569f655-5vwfg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5dfcc949ddf", MAC:"5a:04:5a:2f:35:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:46.771057 containerd[1601]: 2026-01-14 01:32:46.761 [INFO][4418] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" Namespace="calico-system" Pod="goldmane-666569f655-5vwfg" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--5vwfg-eth0" Jan 14 01:32:46.803414 kubelet[2869]: E0114 01:32:46.802740 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:32:46.805759 kubelet[2869]: E0114 01:32:46.805690 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:32:46.816041 kubelet[2869]: E0114 01:32:46.814721 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:32:46.822000 audit: BPF prog-id=213 op=LOAD Jan 14 01:32:46.822000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe0d75df60 a2=94 a3=1 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.822000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.822000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:32:46.822000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe0d75df60 a2=94 a3=1 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.822000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.836000 audit: BPF prog-id=214 op=LOAD Jan 14 01:32:46.836000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0d75df50 a2=94 a3=4 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:32:46.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.840000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:32:46.840000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe0d75df50 a2=0 a3=4 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.840000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.840000 audit: BPF prog-id=215 op=LOAD Jan 14 01:32:46.840000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe0d75ddb0 a2=94 a3=5 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.840000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.840000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:32:46.840000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe0d75ddb0 a2=0 a3=5 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.840000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.841000 audit: BPF prog-id=216 op=LOAD Jan 14 01:32:46.841000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0d75dfd0 a2=94 a3=6 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.841000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.841000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:32:46.841000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe0d75dfd0 a2=0 a3=6 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.841000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.841000 audit: BPF prog-id=217 op=LOAD Jan 14 01:32:46.841000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe0d75d780 a2=94 a3=88 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.841000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.841000 audit: BPF prog-id=218 op=LOAD Jan 14 01:32:46.841000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe0d75d600 a2=94 a3=2 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.841000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.841000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:32:46.841000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe0d75d630 a2=0 a3=7ffe0d75d730 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.841000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.846000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:32:46.846000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3d543d10 a2=0 a3=c80938dc5b063c72 items=0 ppid=4121 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.846000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:32:46.887865 containerd[1601]: time="2026-01-14T01:32:46.887053740Z" level=info msg="connecting to shim eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451" address="unix:///run/containerd/s/50278eed5230802ddbc78b920b4506720772734ccb6c0f68f4e18170092e378e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:46.896000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:32:46.896000 audit[4121]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000798440 a2=0 a3=0 items=0 ppid=4092 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.896000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:32:46.971000 audit[4568]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:46.971000 audit[4568]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb70a9fc0 a2=0 a3=7ffcb70a9fac items=0 ppid=3030 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:46.980000 audit[4568]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:46.980000 audit[4568]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb70a9fc0 a2=0 a3=0 items=0 
ppid=3030 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:46.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:46.995754 systemd-networkd[1498]: calicb18c26aeed: Link UP Jan 14 01:32:46.999402 systemd-networkd[1498]: calicb18c26aeed: Gained carrier Jan 14 01:32:47.052607 systemd[1]: Started cri-containerd-eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451.scope - libcontainer container eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451. Jan 14 01:32:47.051000 audit[4583]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:47.051000 audit[4583]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe76b102e0 a2=0 a3=7ffe76b102cc items=0 ppid=3030 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:47.068000 audit[4583]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:47.068000 audit[4583]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe76b102e0 a2=0 a3=0 items=0 ppid=3030 pid=4583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.068000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:47.092972 containerd[1601]: 2026-01-14 01:32:46.408 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0 calico-kube-controllers-546579f487- calico-system 35648de2-563a-403b-bdd1-f0409de12a27 950 0 2026-01-14 01:31:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:546579f487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-546579f487-48d5w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicb18c26aeed [] [] }} ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-" Jan 14 01:32:47.092972 containerd[1601]: 2026-01-14 01:32:46.409 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.092972 containerd[1601]: 2026-01-14 01:32:46.542 [INFO][4505] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" HandleID="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Workload="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.545 [INFO][4505] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" HandleID="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Workload="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-546579f487-48d5w", "timestamp":"2026-01-14 01:32:46.542572254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.545 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.696 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.698 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.735 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" host="localhost" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.765 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.791 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.805 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.820 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:47.093345 containerd[1601]: 2026-01-14 01:32:46.820 [INFO][4505] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" host="localhost" Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.853 [INFO][4505] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.887 [INFO][4505] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" host="localhost" Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.918 [INFO][4505] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" host="localhost" Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.920 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" host="localhost" Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.923 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:47.093829 containerd[1601]: 2026-01-14 01:32:46.932 [INFO][4505] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" HandleID="k8s-pod-network.7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Workload="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.094061 containerd[1601]: 2026-01-14 01:32:46.958 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0", GenerateName:"calico-kube-controllers-546579f487-", Namespace:"calico-system", SelfLink:"", UID:"35648de2-563a-403b-bdd1-f0409de12a27", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546579f487", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-546579f487-48d5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb18c26aeed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:47.094175 containerd[1601]: 2026-01-14 01:32:46.971 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.094175 containerd[1601]: 2026-01-14 01:32:46.972 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb18c26aeed ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.094175 containerd[1601]: 2026-01-14 01:32:47.000 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.094369 containerd[1601]: 
2026-01-14 01:32:47.001 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0", GenerateName:"calico-kube-controllers-546579f487-", Namespace:"calico-system", SelfLink:"", UID:"35648de2-563a-403b-bdd1-f0409de12a27", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546579f487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab", Pod:"calico-kube-controllers-546579f487-48d5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicb18c26aeed", MAC:"5a:96:9f:66:29:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:47.094532 containerd[1601]: 
2026-01-14 01:32:47.077 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" Namespace="calico-system" Pod="calico-kube-controllers-546579f487-48d5w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546579f487--48d5w-eth0" Jan 14 01:32:47.149000 audit: BPF prog-id=219 op=LOAD Jan 14 01:32:47.166487 kernel: kauditd_printk_skb: 270 callbacks suppressed Jan 14 01:32:47.166592 kernel: audit: type=1334 audit(1768354367.152:658): prog-id=220 op=LOAD Jan 14 01:32:47.152000 audit: BPF prog-id=220 op=LOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00013a238 a2=98 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.228178 kernel: audit: type=1300 audit(1768354367.152:658): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00013a238 a2=98 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.228376 kernel: audit: type=1327 audit(1768354367.152:658): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.228406 containerd[1601]: 
time="2026-01-14T01:32:47.215002431Z" level=info msg="connecting to shim 7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab" address="unix:///run/containerd/s/11d1c7b4542f9c63da1c427fc4ca45adec75bb23b02e6c0a00b7bcd513f2515c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:47.152000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.258109 kernel: audit: type=1334 audit(1768354367.152:659): prog-id=220 op=UNLOAD Jan 14 01:32:47.258255 kernel: audit: type=1300 audit(1768354367.152:659): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.284024 kernel: audit: type=1327 audit(1768354367.152:659): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.152000 audit: BPF prog-id=221 op=LOAD Jan 14 01:32:47.293374 kernel: audit: type=1334 audit(1768354367.152:660): prog-id=221 op=LOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00013a488 a2=98 
a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.319139 kernel: audit: type=1300 audit(1768354367.152:660): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00013a488 a2=98 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.341335 kernel: audit: type=1327 audit(1768354367.152:660): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.152000 audit: BPF prog-id=222 op=LOAD Jan 14 01:32:47.349992 kernel: audit: type=1334 audit(1768354367.152:661): prog-id=222 op=LOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00013a218 a2=98 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 
01:32:47.152000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.152000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.152000 audit: BPF prog-id=223 op=LOAD Jan 14 01:32:47.152000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00013a6e8 a2=98 a3=0 items=0 ppid=4553 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.152000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562396561316362636338353331643135326362633536636431323833 Jan 14 01:32:47.378590 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:47.417425 systemd-networkd[1498]: caliba08acaff58: Gained IPv6LL Jan 14 01:32:47.433147 systemd-networkd[1498]: calic8dd7ac1513: Link UP Jan 14 01:32:47.433671 systemd-networkd[1498]: calic8dd7ac1513: Gained carrier Jan 14 01:32:47.450000 audit[4645]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:47.450000 audit[4645]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffadd26630 a2=0 a3=7fffadd2661c items=0 ppid=4121 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.450000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:47.459655 systemd[1]: Started cri-containerd-7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab.scope - libcontainer container 7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab. 
Jan 14 01:32:47.469000 audit[4648]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:47.469000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc89411b90 a2=0 a3=7ffc89411b7c items=0 ppid=4121 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.469000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:47.471000 audit[4647]: NETFILTER_CFG table=mangle:129 family=2 entries=16 op=nft_register_chain pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:47.471000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc62469b00 a2=0 a3=7ffc62469aec items=0 ppid=4121 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.471000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:47.481123 containerd[1601]: 2026-01-14 01:32:46.472 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9jt56-eth0 csi-node-driver- calico-system a92d2670-8bc7-4318-8d73-b12be2d0a45e 801 0 2026-01-14 01:31:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9jt56 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic8dd7ac1513 [] [] }} ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-" Jan 14 01:32:47.481123 containerd[1601]: 2026-01-14 01:32:46.472 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.481123 containerd[1601]: 2026-01-14 01:32:46.667 [INFO][4518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" HandleID="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Workload="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:46.670 [INFO][4518] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" HandleID="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Workload="localhost-k8s-csi--node--driver--9jt56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c9990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9jt56", "timestamp":"2026-01-14 01:32:46.66785027 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:47.481483 containerd[1601]: 
2026-01-14 01:32:46.671 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:46.926 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:46.927 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.004 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" host="localhost" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.080 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.111 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.124 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.132 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:47.481483 containerd[1601]: 2026-01-14 01:32:47.132 [INFO][4518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" host="localhost" Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.143 [INFO][4518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.169 [INFO][4518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" 
host="localhost" Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.384 [INFO][4518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" host="localhost" Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.385 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" host="localhost" Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.386 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:47.481859 containerd[1601]: 2026-01-14 01:32:47.386 [INFO][4518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" HandleID="k8s-pod-network.9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Workload="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.482117 containerd[1601]: 2026-01-14 01:32:47.417 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jt56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a92d2670-8bc7-4318-8d73-b12be2d0a45e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9jt56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8dd7ac1513", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:47.482296 containerd[1601]: 2026-01-14 01:32:47.418 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.482296 containerd[1601]: 2026-01-14 01:32:47.418 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8dd7ac1513 ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.482296 containerd[1601]: 2026-01-14 01:32:47.431 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.482407 containerd[1601]: 2026-01-14 01:32:47.440 
[INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jt56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a92d2670-8bc7-4318-8d73-b12be2d0a45e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e", Pod:"csi-node-driver-9jt56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic8dd7ac1513", MAC:"a2:35:50:2b:c8:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:47.482570 containerd[1601]: 2026-01-14 01:32:47.462 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" Namespace="calico-system" Pod="csi-node-driver-9jt56" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jt56-eth0" Jan 14 01:32:47.538000 audit: BPF prog-id=224 op=LOAD Jan 14 01:32:47.539000 audit: BPF prog-id=225 op=LOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=226 op=LOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=227 op=LOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.539000 audit: BPF prog-id=228 op=LOAD Jan 14 01:32:47.539000 audit[4627]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4615 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739353563383136366239636239313562346134393434643335303037 Jan 14 01:32:47.510000 audit[4651]: NETFILTER_CFG table=filter:130 family=2 entries=165 op=nft_register_chain pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:47.510000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=97412 a0=3 a1=7ffd5c8e0020 a2=0 a3=7ffd5c8e000c items=0 ppid=4121 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.510000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:47.544386 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:47.546505 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Jan 14 
01:32:47.562506 containerd[1601]: time="2026-01-14T01:32:47.562349391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5vwfg,Uid:1821a0db-e895-49f0-8081-ae8dd6cf61e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb9ea1cbcc8531d152cbc56cd128344cab72686680ae5547a84095e263b4f451\"" Jan 14 01:32:47.585689 containerd[1601]: time="2026-01-14T01:32:47.585169793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:32:47.600031 containerd[1601]: time="2026-01-14T01:32:47.599784814Z" level=info msg="connecting to shim 9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e" address="unix:///run/containerd/s/fbb1dd0b0b68af434adb46a5b817029062fe3f426b7f6684207f9d64b921bf15" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:47.652000 audit[4706]: NETFILTER_CFG table=filter:131 family=2 entries=120 op=nft_register_chain pid=4706 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:47.652000 audit[4706]: SYSCALL arch=c000003e syscall=46 success=yes exit=66612 a0=3 a1=7ffd83b6bef0 a2=0 a3=7ffd83b6bedc items=0 ppid=4121 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.652000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:47.674097 containerd[1601]: time="2026-01-14T01:32:47.673854434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:47.674264 systemd-networkd[1498]: calif247713f823: Gained IPv6LL Jan 14 01:32:47.681452 systemd[1]: Started cri-containerd-9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e.scope - libcontainer container 9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e. 
Jan 14 01:32:47.684405 containerd[1601]: time="2026-01-14T01:32:47.683661760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546579f487-48d5w,Uid:35648de2-563a-403b-bdd1-f0409de12a27,Namespace:calico-system,Attempt:0,} returns sandbox id \"7955c8166b9cb915b4a4944d35007d642399056de7a86a287d7b0d62c01084ab\"" Jan 14 01:32:47.689387 containerd[1601]: time="2026-01-14T01:32:47.687022164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:32:47.689387 containerd[1601]: time="2026-01-14T01:32:47.687079360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:47.689489 kubelet[2869]: E0114 01:32:47.687506 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:32:47.689489 kubelet[2869]: E0114 01:32:47.687550 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:32:47.689489 kubelet[2869]: E0114 01:32:47.687690 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:47.689489 kubelet[2869]: E0114 01:32:47.689051 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:32:47.693172 containerd[1601]: time="2026-01-14T01:32:47.693143831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:32:47.714000 audit: BPF prog-id=229 op=LOAD Jan 14 01:32:47.717000 audit: BPF prog-id=230 op=LOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 
a2=98 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=231 op=LOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=232 op=LOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:32:47.717000 audit[4704]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.717000 audit: BPF prog-id=233 op=LOAD Jan 14 01:32:47.717000 
audit[4704]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4688 pid=4704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:47.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963363231323661666131393963623732653835303062663432386363 Jan 14 01:32:47.721495 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:47.759268 containerd[1601]: time="2026-01-14T01:32:47.758297117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jt56,Uid:a92d2670-8bc7-4318-8d73-b12be2d0a45e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c62126afa199cb72e8500bf428cce65434133ffae65373ac0be1a99d209fe7e\"" Jan 14 01:32:47.775110 containerd[1601]: time="2026-01-14T01:32:47.774866430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:47.777994 containerd[1601]: time="2026-01-14T01:32:47.777464309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:32:47.778258 containerd[1601]: time="2026-01-14T01:32:47.778034777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:47.778702 kubelet[2869]: E0114 01:32:47.778575 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:32:47.778702 kubelet[2869]: E0114 01:32:47.778629 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:32:47.779320 kubelet[2869]: E0114 01:32:47.779098 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:47.780461 containerd[1601]: time="2026-01-14T01:32:47.780083326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:32:47.781099 kubelet[2869]: E0114 01:32:47.780846 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:32:47.814175 kubelet[2869]: E0114 01:32:47.813866 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:32:47.820113 kubelet[2869]: E0114 01:32:47.819785 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:32:47.821114 kubelet[2869]: E0114 01:32:47.820703 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:32:47.822089 
kubelet[2869]: E0114 01:32:47.821742 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:32:47.849311 containerd[1601]: time="2026-01-14T01:32:47.849185332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:47.853609 containerd[1601]: time="2026-01-14T01:32:47.853351720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:32:47.854107 containerd[1601]: time="2026-01-14T01:32:47.853604360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:47.854466 kubelet[2869]: E0114 01:32:47.854425 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:32:47.855383 kubelet[2869]: E0114 01:32:47.854870 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:32:47.855383 kubelet[2869]: E0114 01:32:47.855187 2869 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:47.860827 containerd[1601]: time="2026-01-14T01:32:47.860497174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:32:47.930171 systemd-networkd[1498]: cali5dfcc949ddf: Gained IPv6LL Jan 14 01:32:47.946665 containerd[1601]: time="2026-01-14T01:32:47.945418788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:47.951290 containerd[1601]: time="2026-01-14T01:32:47.950849905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:32:47.951290 containerd[1601]: time="2026-01-14T01:32:47.951094642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:32:47.952563 kubelet[2869]: E0114 01:32:47.952490 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:32:47.952708 kubelet[2869]: E0114 01:32:47.952686 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:32:47.954668 kubelet[2869]: E0114 01:32:47.954550 2869 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:47.958148 kubelet[2869]: E0114 01:32:47.957547 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:32:48.100353 kubelet[2869]: E0114 01:32:48.100039 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:48.105836 kubelet[2869]: E0114 01:32:48.105505 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:48.109081 containerd[1601]: time="2026-01-14T01:32:48.107812260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68nvj,Uid:898e39c7-945e-4928-a3eb-790aff1d14eb,Namespace:kube-system,Attempt:0,}" Jan 14 01:32:48.119340 containerd[1601]: time="2026-01-14T01:32:48.117787222Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dcn4k,Uid:277e32d3-813e-4a52-82ac-39307655fe89,Namespace:kube-system,Attempt:0,}" Jan 14 01:32:48.249000 audit[4740]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:48.249000 audit[4740]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe7aedc1d0 a2=0 a3=7ffe7aedc1bc items=0 ppid=3030 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.249000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:48.276000 audit[4740]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:48.276000 audit[4740]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe7aedc1d0 a2=0 a3=0 items=0 ppid=3030 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:48.599630 systemd-networkd[1498]: cali652affa2cfe: Link UP Jan 14 01:32:48.602642 systemd-networkd[1498]: cali652affa2cfe: Gained carrier Jan 14 01:32:48.633066 systemd-networkd[1498]: calicb18c26aeed: Gained IPv6LL Jan 14 01:32:48.636620 containerd[1601]: 2026-01-14 01:32:48.392 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--68nvj-eth0 coredns-674b8bbfcf- kube-system 
898e39c7-945e-4928-a3eb-790aff1d14eb 946 0 2026-01-14 01:31:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-68nvj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali652affa2cfe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-" Jan 14 01:32:48.636620 containerd[1601]: 2026-01-14 01:32:48.394 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.636620 containerd[1601]: 2026-01-14 01:32:48.491 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" HandleID="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Workload="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.492 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" HandleID="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Workload="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df060), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-68nvj", "timestamp":"2026-01-14 01:32:48.491397747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.492 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.492 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.492 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.507 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" host="localhost" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.524 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.536 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.540 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.548 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:48.640021 containerd[1601]: 2026-01-14 01:32:48.548 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" host="localhost" Jan 14 01:32:48.640673 containerd[1601]: 2026-01-14 01:32:48.555 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e Jan 14 01:32:48.640673 
containerd[1601]: 2026-01-14 01:32:48.565 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" host="localhost" Jan 14 01:32:48.640673 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" host="localhost" Jan 14 01:32:48.640673 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" host="localhost" Jan 14 01:32:48.640673 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:48.640673 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" HandleID="k8s-pod-network.ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Workload="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.640857 containerd[1601]: 2026-01-14 01:32:48.590 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--68nvj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"898e39c7-945e-4928-a3eb-790aff1d14eb", ResourceVersion:"946", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-68nvj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali652affa2cfe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:48.643152 containerd[1601]: 2026-01-14 01:32:48.590 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.643152 containerd[1601]: 2026-01-14 01:32:48.590 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali652affa2cfe 
ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.643152 containerd[1601]: 2026-01-14 01:32:48.604 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.645031 containerd[1601]: 2026-01-14 01:32:48.616 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--68nvj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"898e39c7-945e-4928-a3eb-790aff1d14eb", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e", Pod:"coredns-674b8bbfcf-68nvj", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali652affa2cfe", MAC:"6a:e9:b8:7b:52:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:48.645031 containerd[1601]: 2026-01-14 01:32:48.630 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" Namespace="kube-system" Pod="coredns-674b8bbfcf-68nvj" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--68nvj-eth0" Jan 14 01:32:48.721579 containerd[1601]: time="2026-01-14T01:32:48.721344439Z" level=info msg="connecting to shim ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e" address="unix:///run/containerd/s/ebc87d1aaad1c97f9531545845ff092a5f69e4be7b6a8199858fd910034a4156" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:48.739000 audit[4805]: NETFILTER_CFG table=filter:134 family=2 entries=54 op=nft_register_chain pid=4805 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:48.739000 audit[4805]: SYSCALL arch=c000003e syscall=46 success=yes exit=26084 a0=3 a1=7ffee54cd2c0 a2=0 a3=7ffee54cd2ac items=0 ppid=4121 pid=4805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.739000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:48.755815 systemd-networkd[1498]: califbfa94ab514: Link UP Jan 14 01:32:48.763039 systemd-networkd[1498]: calic8dd7ac1513: Gained IPv6LL Jan 14 01:32:48.765157 systemd-networkd[1498]: califbfa94ab514: Gained carrier Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.404 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0 coredns-674b8bbfcf- kube-system 277e32d3-813e-4a52-82ac-39307655fe89 949 0 2026-01-14 01:31:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dcn4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califbfa94ab514 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.405 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.493 [INFO][4775] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" HandleID="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Workload="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 
01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.494 [INFO][4775] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" HandleID="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Workload="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137b50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dcn4k", "timestamp":"2026-01-14 01:32:48.493852189 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.494 [INFO][4775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.583 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.609 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.654 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.669 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.676 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.685 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.685 [INFO][4775] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.690 [INFO][4775] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813 Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.703 [INFO][4775] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.739 [INFO][4775] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.740 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" host="localhost" Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.740 [INFO][4775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:32:48.821761 containerd[1601]: 2026-01-14 01:32:48.741 [INFO][4775] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" HandleID="k8s-pod-network.cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Workload="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.745 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"277e32d3-813e-4a52-82ac-39307655fe89", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dcn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfa94ab514", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.745 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.745 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califbfa94ab514 ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.769 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.772 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"277e32d3-813e-4a52-82ac-39307655fe89", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813", Pod:"coredns-674b8bbfcf-dcn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califbfa94ab514", MAC:"32:8e:5d:e1:8c:01", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:32:48.826613 containerd[1601]: 2026-01-14 01:32:48.810 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcn4k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dcn4k-eth0" Jan 14 01:32:48.827831 systemd[1]: Started cri-containerd-ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e.scope - libcontainer container ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e. Jan 14 01:32:48.836449 kubelet[2869]: E0114 01:32:48.836379 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:32:48.838733 kubelet[2869]: E0114 01:32:48.836596 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" 
podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:32:48.843876 kubelet[2869]: E0114 01:32:48.843814 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:32:48.879000 audit: BPF prog-id=234 op=LOAD Jan 14 01:32:48.883000 audit: BPF prog-id=235 op=LOAD Jan 14 01:32:48.883000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=236 op=LOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=237 op=LOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.884000 audit: BPF prog-id=238 op=LOAD Jan 14 01:32:48.884000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4804 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565303765356466633565373331343136373464323434386165366433 Jan 14 01:32:48.899774 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:48.918000 audit[4853]: 
NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:32:48.918000 audit[4853]: SYSCALL arch=c000003e syscall=46 success=yes exit=25540 a0=3 a1=7fff3f989420 a2=0 a3=7fff3f98940c items=0 ppid=4121 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:48.918000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:32:48.922427 containerd[1601]: time="2026-01-14T01:32:48.922336004Z" level=info msg="connecting to shim cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813" address="unix:///run/containerd/s/040871e8ede7aa1c5e4170e32bfe4ad342385757c2c8c42f12e724a7b118dbdf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:32:49.018633 systemd[1]: Started cri-containerd-cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813.scope - libcontainer container cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813. 
Jan 14 01:32:49.030448 containerd[1601]: time="2026-01-14T01:32:49.030353830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-68nvj,Uid:898e39c7-945e-4928-a3eb-790aff1d14eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e\"" Jan 14 01:32:49.033364 kubelet[2869]: E0114 01:32:49.033294 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:49.050659 containerd[1601]: time="2026-01-14T01:32:49.050578996Z" level=info msg="CreateContainer within sandbox \"ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:32:49.072000 audit: BPF prog-id=239 op=LOAD Jan 14 01:32:49.074000 audit: BPF prog-id=240 op=LOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.074000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.074000 audit: BPF prog-id=241 op=LOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.074000 audit: BPF prog-id=242 op=LOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.074000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.074000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:32:49.074000 audit[4866]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.075000 audit: BPF prog-id=243 op=LOAD Jan 14 01:32:49.075000 audit[4866]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4856 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364366431613232653432636137373630643437383837303335336562 Jan 14 01:32:49.084107 systemd-resolved[1286]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:32:49.090354 containerd[1601]: time="2026-01-14T01:32:49.090116047Z" level=info msg="Container 2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d: CDI 
devices from CRI Config.CDIDevices: []" Jan 14 01:32:49.108811 containerd[1601]: time="2026-01-14T01:32:49.108678455Z" level=info msg="CreateContainer within sandbox \"ee07e5dfc5e73141674d2448ae6d39881afaa87750cdceceeaaa346b1036687e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d\"" Jan 14 01:32:49.110620 containerd[1601]: time="2026-01-14T01:32:49.110463508Z" level=info msg="StartContainer for \"2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d\"" Jan 14 01:32:49.117020 containerd[1601]: time="2026-01-14T01:32:49.116831883Z" level=info msg="connecting to shim 2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d" address="unix:///run/containerd/s/ebc87d1aaad1c97f9531545845ff092a5f69e4be7b6a8199858fd910034a4156" protocol=ttrpc version=3 Jan 14 01:32:49.162795 containerd[1601]: time="2026-01-14T01:32:49.162001979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcn4k,Uid:277e32d3-813e-4a52-82ac-39307655fe89,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813\"" Jan 14 01:32:49.165884 kubelet[2869]: E0114 01:32:49.165093 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:49.168485 systemd[1]: Started cri-containerd-2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d.scope - libcontainer container 2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d. 
Jan 14 01:32:49.186150 containerd[1601]: time="2026-01-14T01:32:49.185860429Z" level=info msg="CreateContainer within sandbox \"cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:32:49.255026 containerd[1601]: time="2026-01-14T01:32:49.253169518Z" level=info msg="Container 8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:32:49.268000 audit: BPF prog-id=244 op=LOAD Jan 14 01:32:49.272000 audit: BPF prog-id=245 op=LOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=246 op=LOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=247 op=LOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=246 op=UNLOAD Jan 14 
01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.272000 audit: BPF prog-id=248 op=LOAD Jan 14 01:32:49.272000 audit[4891]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4804 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393162343130653030333731613030303031386637626464366135 Jan 14 01:32:49.278733 containerd[1601]: time="2026-01-14T01:32:49.278693073Z" level=info msg="CreateContainer within sandbox \"cd6d1a22e42ca7760d478870353eb8307cbc2ce68885c4b34dda9ebb76342813\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763\"" Jan 14 01:32:49.282026 containerd[1601]: time="2026-01-14T01:32:49.281508793Z" level=info msg="StartContainer for \"8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763\"" Jan 14 01:32:49.286177 containerd[1601]: time="2026-01-14T01:32:49.285854868Z" level=info msg="connecting to shim 8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763" 
address="unix:///run/containerd/s/040871e8ede7aa1c5e4170e32bfe4ad342385757c2c8c42f12e724a7b118dbdf" protocol=ttrpc version=3 Jan 14 01:32:49.344550 systemd[1]: Started cri-containerd-8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763.scope - libcontainer container 8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763. Jan 14 01:32:49.394389 containerd[1601]: time="2026-01-14T01:32:49.394336446Z" level=info msg="StartContainer for \"2691b410e00371a000018f7bdd6a56762327ed60300b08cb943eb079df21550d\" returns successfully" Jan 14 01:32:49.407000 audit: BPF prog-id=249 op=LOAD Jan 14 01:32:49.409000 audit: BPF prog-id=250 op=LOAD Jan 14 01:32:49.409000 audit[4918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.409000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:32:49.409000 audit[4918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.410000 audit: BPF 
prog-id=251 op=LOAD Jan 14 01:32:49.410000 audit[4918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.410000 audit: BPF prog-id=252 op=LOAD Jan 14 01:32:49.410000 audit[4918]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.410000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:32:49.410000 audit[4918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.410000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.410000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:32:49.410000 audit[4918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.411000 audit: BPF prog-id=253 op=LOAD Jan 14 01:32:49.411000 audit[4918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4856 pid=4918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865336538623864646265376538363833343630363531363765373362 Jan 14 01:32:49.488432 containerd[1601]: time="2026-01-14T01:32:49.488311856Z" level=info msg="StartContainer for \"8e3e8b8ddbe7e868346065167e73bc755b331f2efb46f14175b6e3da6f544763\" returns successfully" Jan 14 01:32:49.694684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463895195.mount: Deactivated successfully. 
Jan 14 01:32:49.844409 kubelet[2869]: E0114 01:32:49.843478 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:49.864173 kubelet[2869]: E0114 01:32:49.864123 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:49.935168 kubelet[2869]: I0114 01:32:49.931817 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-68nvj" podStartSLOduration=88.931738065 podStartE2EDuration="1m28.931738065s" podCreationTimestamp="2026-01-14 01:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:32:49.888015456 +0000 UTC m=+94.537287067" watchObservedRunningTime="2026-01-14 01:32:49.931738065 +0000 UTC m=+94.581009646" Jan 14 01:32:49.941000 audit[4970]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:49.941000 audit[4970]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcfe1930e0 a2=0 a3=7ffcfe1930cc items=0 ppid=3030 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:49.948000 audit[4970]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:49.948000 audit[4970]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 
a1=7ffcfe1930e0 a2=0 a3=0 items=0 ppid=3030 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:49.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:50.069000 audit[4972]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:50.069000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe49326200 a2=0 a3=7ffe493261ec items=0 ppid=3030 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:50.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:50.104000 audit[4972]: NETFILTER_CFG table=nat:139 family=2 entries=47 op=nft_register_chain pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:32:50.104000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe49326200 a2=0 a3=7ffe493261ec items=0 ppid=3030 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:32:50.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:32:50.360525 systemd-networkd[1498]: califbfa94ab514: Gained IPv6LL Jan 14 01:32:50.620689 systemd-networkd[1498]: cali652affa2cfe: Gained IPv6LL Jan 14 01:32:50.877637 kubelet[2869]: E0114 
01:32:50.876513 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:50.877637 kubelet[2869]: E0114 01:32:50.876562 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:51.101360 kubelet[2869]: E0114 01:32:51.098288 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:51.884726 kubelet[2869]: E0114 01:32:51.884385 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:51.884726 kubelet[2869]: E0114 01:32:51.884517 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:32:59.115027 containerd[1601]: time="2026-01-14T01:32:59.113580807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:32:59.216738 containerd[1601]: time="2026-01-14T01:32:59.216250743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:32:59.227613 containerd[1601]: time="2026-01-14T01:32:59.227233130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:32:59.227613 containerd[1601]: time="2026-01-14T01:32:59.227487315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 
01:32:59.231799 kubelet[2869]: E0114 01:32:59.228260 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:59.231799 kubelet[2869]: E0114 01:32:59.228461 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:32:59.231799 kubelet[2869]: E0114 01:32:59.228631 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47q8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:32:59.236528 kubelet[2869]: E0114 01:32:59.233731 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:33:00.142446 containerd[1601]: time="2026-01-14T01:33:00.140267190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:33:00.243302 kubelet[2869]: I0114 01:33:00.237828 2869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-dcn4k" podStartSLOduration=99.237803666 podStartE2EDuration="1m39.237803666s" podCreationTimestamp="2026-01-14 01:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:32:49.966324384 +0000 UTC m=+94.615595965" watchObservedRunningTime="2026-01-14 01:33:00.237803666 +0000 UTC m=+104.887075247" Jan 14 01:33:00.285688 containerd[1601]: time="2026-01-14T01:33:00.284066969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:33:00.291617 containerd[1601]: time="2026-01-14T01:33:00.291480003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:33:00.291617 containerd[1601]: time="2026-01-14T01:33:00.291596750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:00.294815 kubelet[2869]: E0114 01:33:00.294493 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:33:00.294815 kubelet[2869]: E0114 01:33:00.294622 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:33:00.294815 kubelet[2869]: E0114 01:33:00.294785 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d51c992c994bfdbf53b4556ecb9a0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:00.301700 containerd[1601]: time="2026-01-14T01:33:00.301565693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:33:00.407436 containerd[1601]: 
time="2026-01-14T01:33:00.399535168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:33:00.411668 containerd[1601]: time="2026-01-14T01:33:00.410537417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:33:00.411668 containerd[1601]: time="2026-01-14T01:33:00.410599982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:00.413109 kubelet[2869]: E0114 01:33:00.412186 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:33:00.413109 kubelet[2869]: E0114 01:33:00.412246 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:33:00.413109 kubelet[2869]: E0114 01:33:00.412457 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:00.414066 kubelet[2869]: E0114 01:33:00.413753 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:33:01.102571 kubelet[2869]: E0114 01:33:01.099710 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:33:01.108774 containerd[1601]: time="2026-01-14T01:33:01.108700561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:33:01.216121 containerd[1601]: time="2026-01-14T01:33:01.216017372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:33:01.226543 containerd[1601]: time="2026-01-14T01:33:01.226227351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:33:01.226543 containerd[1601]: time="2026-01-14T01:33:01.226418067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 
14 01:33:01.227420 kubelet[2869]: E0114 01:33:01.226849 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:33:01.227420 kubelet[2869]: E0114 01:33:01.227234 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:33:01.229256 kubelet[2869]: E0114 01:33:01.229174 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:01.231143 kubelet[2869]: E0114 01:33:01.231019 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:33:03.120297 containerd[1601]: time="2026-01-14T01:33:03.119312729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:33:03.244336 containerd[1601]: time="2026-01-14T01:33:03.244101734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:33:03.256019 containerd[1601]: time="2026-01-14T01:33:03.249453886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:33:03.256019 containerd[1601]: time="2026-01-14T01:33:03.249545146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:03.256181 kubelet[2869]: E0114 01:33:03.253852 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:33:03.256181 kubelet[2869]: E0114 01:33:03.254042 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:33:03.256728 kubelet[2869]: E0114 01:33:03.254747 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:03.257148 containerd[1601]: time="2026-01-14T01:33:03.256757389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:33:03.257606 kubelet[2869]: E0114 01:33:03.257327 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:33:03.347784 containerd[1601]: time="2026-01-14T01:33:03.347227668Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:33:03.363675 containerd[1601]: time="2026-01-14T01:33:03.359311422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:33:03.363675 containerd[1601]: time="2026-01-14T01:33:03.359480882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:03.363675 containerd[1601]: time="2026-01-14T01:33:03.363467568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:33:03.364141 kubelet[2869]: E0114 01:33:03.360066 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:33:03.364141 kubelet[2869]: E0114 01:33:03.360125 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:33:03.364141 kubelet[2869]: E0114 01:33:03.360261 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:33:03.488328 containerd[1601]: time="2026-01-14T01:33:03.477861152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:33:03.503967 containerd[1601]: time="2026-01-14T01:33:03.503534594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:33:03.505089 containerd[1601]: time="2026-01-14T01:33:03.503613682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:03.508858 kubelet[2869]: E0114 01:33:03.506587 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:33:03.508858 kubelet[2869]: E0114 01:33:03.506699 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:33:03.508858 kubelet[2869]: E0114 01:33:03.506845 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:03.511112 kubelet[2869]: E0114 01:33:03.510836 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e"
Jan 14 01:33:04.105113 containerd[1601]: time="2026-01-14T01:33:04.102198582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 14 01:33:04.255083 containerd[1601]: time="2026-01-14T01:33:04.254886622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:04.262125 containerd[1601]: time="2026-01-14T01:33:04.261803139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 14 01:33:04.262125 containerd[1601]: time="2026-01-14T01:33:04.262034150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:04.264427 kubelet[2869]: E0114 01:33:04.263494 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:33:04.264427 kubelet[2869]: E0114 01:33:04.263632 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:33:04.264427 kubelet[2869]: E0114 01:33:04.264033 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:04.266184 kubelet[2869]: E0114 01:33:04.266128 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7"
Jan 14 01:33:10.109195 kubelet[2869]: E0114 01:33:10.107235 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25"
Jan 14 01:33:12.113592 kubelet[2869]: E0114 01:33:12.113067 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967"
Jan 14 01:33:13.128532 kubelet[2869]: E0114 01:33:13.122531 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1"
Jan 14 01:33:14.097254 kubelet[2869]: E0114 01:33:14.097083 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:33:14.102189 kubelet[2869]: E0114 01:33:14.100238 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27"
Jan 14 01:33:14.487823 kubelet[2869]: E0114 01:33:14.485213 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:33:16.113192 kubelet[2869]: E0114 01:33:16.112576 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e"
Jan 14 01:33:19.108094 kubelet[2869]: E0114 01:33:19.106847 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7"
Jan 14 01:33:23.135196 containerd[1601]: time="2026-01-14T01:33:23.134556260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 01:33:23.295830 containerd[1601]: time="2026-01-14T01:33:23.294194552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:23.312337 containerd[1601]: time="2026-01-14T01:33:23.309069859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 01:33:23.312337 containerd[1601]: time="2026-01-14T01:33:23.309149136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:23.316319 kubelet[2869]: E0114 01:33:23.309238 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:33:23.316319 kubelet[2869]: E0114 01:33:23.309287 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:33:23.316319 kubelet[2869]: E0114 01:33:23.309525 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47q8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:23.316319 kubelet[2869]: E0114 01:33:23.311227 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25"
Jan 14 01:33:25.106773 containerd[1601]: time="2026-01-14T01:33:25.106576224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 14 01:33:25.230812 containerd[1601]: time="2026-01-14T01:33:25.230753744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:25.242714 containerd[1601]: time="2026-01-14T01:33:25.242641287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 14 01:33:25.243246 containerd[1601]: time="2026-01-14T01:33:25.242687475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:25.246689 kubelet[2869]: E0114 01:33:25.245552 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 01:33:25.250181 kubelet[2869]: E0114 01:33:25.247837 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 01:33:25.250181 kubelet[2869]: E0114 01:33:25.248152 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d51c992c994bfdbf53b4556ecb9a0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:25.263864 containerd[1601]: time="2026-01-14T01:33:25.263539940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 14 01:33:25.360169 containerd[1601]: time="2026-01-14T01:33:25.359321255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:25.365710 containerd[1601]: time="2026-01-14T01:33:25.365340603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 14 01:33:25.365710 containerd[1601]: time="2026-01-14T01:33:25.365643838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:25.369389 kubelet[2869]: E0114 01:33:25.367575 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 01:33:25.369389 kubelet[2869]: E0114 01:33:25.367647 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 01:33:25.369389 kubelet[2869]: E0114 01:33:25.367798 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:25.369389 kubelet[2869]: E0114 01:33:25.369285 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1"
Jan 14 01:33:26.104227 containerd[1601]: time="2026-01-14T01:33:26.104122181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 01:33:26.210615 containerd[1601]: time="2026-01-14T01:33:26.210170939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:26.220172 containerd[1601]: time="2026-01-14T01:33:26.219707294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 01:33:26.223205 containerd[1601]: time="2026-01-14T01:33:26.222724041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:26.227030 containerd[1601]: time="2026-01-14T01:33:26.224741229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 14 01:33:26.227102 kubelet[2869]: E0114 01:33:26.223683 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:33:26.227102 kubelet[2869]: E0114 01:33:26.223739 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:33:26.227102 kubelet[2869]: E0114 01:33:26.224083 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:26.227102 kubelet[2869]: E0114 01:33:26.225287 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967"
Jan 14 01:33:26.360222 containerd[1601]: time="2026-01-14T01:33:26.358811867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:26.366840 containerd[1601]: time="2026-01-14T01:33:26.366696503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 14 01:33:26.366840 containerd[1601]: time="2026-01-14T01:33:26.366808222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:26.367571 kubelet[2869]: E0114 01:33:26.367095 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:33:26.367571 kubelet[2869]: E0114 01:33:26.367156 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:33:26.368302 kubelet[2869]: E0114 01:33:26.367789 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:26.369331 kubelet[2869]: E0114 01:33:26.369235 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27"
Jan 14 01:33:30.100200 containerd[1601]: time="2026-01-14T01:33:30.100148645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 14 01:33:30.182670 containerd[1601]: time="2026-01-14T01:33:30.182342057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:30.186673 containerd[1601]: time="2026-01-14T01:33:30.186395618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 14 01:33:30.186673 containerd[1601]: time="2026-01-14T01:33:30.186470736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:30.187291 kubelet[2869]: E0114 01:33:30.187241 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:33:30.187821 kubelet[2869]: E0114 01:33:30.187294 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:33:30.187821 kubelet[2869]: E0114 01:33:30.187742 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:33:30.195686 containerd[1601]: time="2026-01-14T01:33:30.195655547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 14 01:33:30.264314 containerd[1601]: time="2026-01-14T01:33:30.264108552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:33:30.268460 containerd[1601]: time="2026-01-14T01:33:30.268297496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:33:30.269279 containerd[1601]: time="2026-01-14T01:33:30.269180623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 14 01:33:30.271381 kubelet[2869]: E0114 01:33:30.271143 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:33:30.271999 kubelet[2869]: E0114 01:33:30.271773 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:33:30.274993 kubelet[2869]: E0114 01:33:30.273771 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:30.275827 kubelet[2869]: E0114 01:33:30.275411 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:33:30.278124 containerd[1601]: time="2026-01-14T01:33:30.277367618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:33:30.359788 containerd[1601]: time="2026-01-14T01:33:30.357204714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:33:30.373576 containerd[1601]: 
time="2026-01-14T01:33:30.373454751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:33:30.374156 containerd[1601]: time="2026-01-14T01:33:30.373865236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:33:30.375014 kubelet[2869]: E0114 01:33:30.374300 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:33:30.375014 kubelet[2869]: E0114 01:33:30.374359 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:33:30.375014 kubelet[2869]: E0114 01:33:30.374574 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:33:30.375811 kubelet[2869]: E0114 01:33:30.375771 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:33:37.112259 kubelet[2869]: E0114 01:33:37.112061 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:33:37.112259 kubelet[2869]: E0114 01:33:37.112189 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:33:40.103466 kubelet[2869]: E0114 01:33:40.103353 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:33:40.104867 kubelet[2869]: E0114 01:33:40.104259 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:33:42.114028 kubelet[2869]: E0114 01:33:42.113860 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:33:46.102241 kubelet[2869]: E0114 01:33:46.102147 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:33:48.101086 kubelet[2869]: E0114 01:33:48.100526 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:33:48.109288 kubelet[2869]: E0114 01:33:48.109191 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:33:51.106002 kubelet[2869]: E0114 01:33:51.104860 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:33:51.110220 kubelet[2869]: E0114 01:33:51.110070 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:33:52.101371 kubelet[2869]: E0114 01:33:52.100545 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:33:55.097159 kubelet[2869]: E0114 01:33:55.097010 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:33:55.100760 
kubelet[2869]: E0114 01:33:55.100670 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:33:56.105591 kubelet[2869]: E0114 01:33:56.104463 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:33:58.096735 kubelet[2869]: E0114 01:33:58.096522 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:34:00.104533 kubelet[2869]: E0114 01:34:00.104424 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:34:01.118821 kubelet[2869]: E0114 01:34:01.118246 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:34:05.101093 kubelet[2869]: E0114 01:34:05.100417 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:34:05.105539 containerd[1601]: time="2026-01-14T01:34:05.105280909Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:34:05.182479 containerd[1601]: time="2026-01-14T01:34:05.182360192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:34:05.184712 containerd[1601]: time="2026-01-14T01:34:05.184574573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:34:05.184811 containerd[1601]: time="2026-01-14T01:34:05.184756663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:34:05.185613 kubelet[2869]: E0114 01:34:05.185369 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:34:05.185613 kubelet[2869]: E0114 01:34:05.185556 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:34:05.186080 kubelet[2869]: E0114 01:34:05.185862 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47q8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:34:05.187404 kubelet[2869]: E0114 01:34:05.187319 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:34:06.096566 kubelet[2869]: E0114 01:34:06.096391 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:34:06.097990 kubelet[2869]: E0114 01:34:06.097634 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:34:09.098133 kubelet[2869]: E0114 01:34:09.097164 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:34:10.103369 kubelet[2869]: E0114 01:34:10.103097 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:34:11.455575 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:42042.service - OpenSSH per-connection server daemon (10.0.0.1:42042). Jan 14 01:34:11.472074 kernel: kauditd_printk_skb: 182 callbacks suppressed Jan 14 01:34:11.472404 kernel: audit: type=1130 audit(1768354451.456:726): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.15:22-10.0.0.1:42042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:11.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.15:22-10.0.0.1:42042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:11.671000 audit[5082]: USER_ACCT pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:11.678874 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:11.689209 kernel: audit: type=1101 audit(1768354451.671:727): pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:11.689299 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 42042 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:11.675000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:11.693563 systemd-logind[1583]: New session 11 of user core. 
Jan 14 01:34:11.711032 kernel: audit: type=1103 audit(1768354451.675:728): pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:11.711163 kernel: audit: type=1006 audit(1768354451.675:729): pid=5082 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:34:11.675000 audit[5082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c372f40 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:11.736993 kernel: audit: type=1300 audit(1768354451.675:729): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c372f40 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:11.742390 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:34:11.675000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:11.755170 kernel: audit: type=1327 audit(1768354451.675:729): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:11.756000 audit[5082]: USER_START pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:11.784999 kernel: audit: type=1105 audit(1768354451.756:730): pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:11.762000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:11.815046 kernel: audit: type=1103 audit(1768354451.762:731): pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:12.106150 containerd[1601]: time="2026-01-14T01:34:12.105436123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 14 01:34:12.153473 sshd[5086]: Connection closed by 10.0.0.1 port 42042
Jan 14 01:34:12.153526 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Jan 14 01:34:12.159000 audit[5082]: USER_END pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:12.165105 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:42042.service: Deactivated successfully.
Jan 14 01:34:12.187611 kernel: audit: type=1106 audit(1768354452.159:732): pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:12.171168 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 01:34:12.178825 systemd-logind[1583]: Session 11 logged out. Waiting for processes to exit.
Jan 14 01:34:12.181379 systemd-logind[1583]: Removed session 11.
Jan 14 01:34:12.159000 audit[5082]: CRED_DISP pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:12.207059 kernel: audit: type=1104 audit(1768354452.159:733): pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:12.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.15:22-10.0.0.1:42042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:12.224683 containerd[1601]: time="2026-01-14T01:34:12.224507206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:12.228160 containerd[1601]: time="2026-01-14T01:34:12.228076594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 14 01:34:12.228376 containerd[1601]: time="2026-01-14T01:34:12.228152842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:12.230257 kubelet[2869]: E0114 01:34:12.230035 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 01:34:12.230257 kubelet[2869]: E0114 01:34:12.230136 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 14 01:34:12.231550 kubelet[2869]: E0114 01:34:12.230296 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d51c992c994bfdbf53b4556ecb9a0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:34:12.236079 containerd[1601]: time="2026-01-14T01:34:12.235834724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 14 01:34:12.299642 containerd[1601]: time="2026-01-14T01:34:12.299515604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:12.304072 containerd[1601]: time="2026-01-14T01:34:12.304005268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 14 01:34:12.304422 containerd[1601]: time="2026-01-14T01:34:12.304297233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:12.305610 kubelet[2869]: E0114 01:34:12.305472 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 01:34:12.305705 kubelet[2869]: E0114 01:34:12.305612 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 14 01:34:12.306655 kubelet[2869]: E0114 01:34:12.305818 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:34:12.307230 kubelet[2869]: E0114 01:34:12.307192 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1"
Jan 14 01:34:13.105174 containerd[1601]: time="2026-01-14T01:34:13.104683283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 14 01:34:13.248599 containerd[1601]: time="2026-01-14T01:34:13.248163524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:13.251869 containerd[1601]: time="2026-01-14T01:34:13.251112625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 14 01:34:13.251869 containerd[1601]: time="2026-01-14T01:34:13.251230715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:13.252316 kubelet[2869]: E0114 01:34:13.251809 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:34:13.252316 kubelet[2869]: E0114 01:34:13.251875 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:34:13.252316 kubelet[2869]: E0114 01:34:13.252157 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:34:13.258246 containerd[1601]: time="2026-01-14T01:34:13.258133557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 14 01:34:13.350536 containerd[1601]: time="2026-01-14T01:34:13.350342526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:13.368010 containerd[1601]: time="2026-01-14T01:34:13.366022266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 14 01:34:13.368010 containerd[1601]: time="2026-01-14T01:34:13.366200317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:13.368182 kubelet[2869]: E0114 01:34:13.367043 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:34:13.368182 kubelet[2869]: E0114 01:34:13.367508 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:34:13.368473 kubelet[2869]: E0114 01:34:13.368330 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:34:13.369992 kubelet[2869]: E0114 01:34:13.369800 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e"
Jan 14 01:34:17.100000 containerd[1601]: time="2026-01-14T01:34:17.099690825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 01:34:17.168527 containerd[1601]: time="2026-01-14T01:34:17.168395286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:17.170989 containerd[1601]: time="2026-01-14T01:34:17.170552801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 01:34:17.170989 containerd[1601]: time="2026-01-14T01:34:17.170609722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:17.171157 kubelet[2869]: E0114 01:34:17.170877 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:34:17.171157 kubelet[2869]: E0114 01:34:17.171043 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:34:17.172334 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:55026.service - OpenSSH per-connection server daemon (10.0.0.1:55026).
Jan 14 01:34:17.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.15:22-10.0.0.1:55026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:17.174999 kubelet[2869]: E0114 01:34:17.174048 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:34:17.175208 kubelet[2869]: E0114 01:34:17.175170 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967"
Jan 14 01:34:17.177344 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:34:17.177483 kernel: audit: type=1130 audit(1768354457.171:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.15:22-10.0.0.1:55026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:17.283000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.298141 systemd-logind[1583]: New session 12 of user core.
Jan 14 01:34:17.288047 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:34:17.301457 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 55026 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:34:17.310025 kernel: audit: type=1101 audit(1768354457.283:736): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.311573 kernel: audit: type=1103 audit(1768354457.285:737): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.285000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.332162 kernel: audit: type=1006 audit(1768354457.285:738): pid=5132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1
Jan 14 01:34:17.339320 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 01:34:17.285000 audit[5132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe03e580c0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:34:17.412612 kernel: audit: type=1300 audit(1768354457.285:738): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe03e580c0 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:34:17.285000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:17.349000 audit[5132]: USER_START pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.456650 kernel: audit: type=1327 audit(1768354457.285:738): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:17.457476 kernel: audit: type=1105 audit(1768354457.349:739): pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.359000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.478426 kernel: audit: type=1103 audit(1768354457.359:740): pid=5138 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.578299 sshd[5138]: Connection closed by 10.0.0.1 port 55026
Jan 14 01:34:17.580224 sshd-session[5132]: pam_unix(sshd:session): session closed for user core
Jan 14 01:34:17.587000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.597863 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:55026.service: Deactivated successfully.
Jan 14 01:34:17.606863 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 01:34:17.622148 kernel: audit: type=1106 audit(1768354457.587:741): pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.622377 systemd-logind[1583]: Session 12 logged out. Waiting for processes to exit.
Jan 14 01:34:17.588000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.624704 systemd-logind[1583]: Removed session 12.
Jan 14 01:34:17.637115 kernel: audit: type=1104 audit(1768354457.588:742): pid=5132 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:17.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.15:22-10.0.0.1:55026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:18.099260 kubelet[2869]: E0114 01:34:18.098659 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25"
Jan 14 01:34:18.099543 containerd[1601]: time="2026-01-14T01:34:18.099138890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 14 01:34:18.182630 containerd[1601]: time="2026-01-14T01:34:18.182419963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:34:18.185324 containerd[1601]: time="2026-01-14T01:34:18.185221654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 14 01:34:18.185489 containerd[1601]: time="2026-01-14T01:34:18.185387895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:34:18.186008 kubelet[2869]: E0114 01:34:18.185681 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:34:18.186384 kubelet[2869]: E0114 01:34:18.185875 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 14 01:34:18.186384 kubelet[2869]: E0114 01:34:18.186300 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:ni
l,},VolumeMount{Name:kube-api-access-nj6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:34:18.188440 kubelet[2869]: E0114 01:34:18.187996 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:34:20.097020 kubelet[2869]: E0114 01:34:20.096320 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:34:21.100293 containerd[1601]: time="2026-01-14T01:34:21.100175548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:34:21.213989 containerd[1601]: time="2026-01-14T01:34:21.213706986Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:34:21.217367 containerd[1601]: time="2026-01-14T01:34:21.217240105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:34:21.217545 containerd[1601]: time="2026-01-14T01:34:21.217388992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:34:21.217929 kubelet[2869]: E0114 01:34:21.217838 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:34:21.218536 kubelet[2869]: E0114 01:34:21.218025 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:34:21.218536 kubelet[2869]: E0114 01:34:21.218228 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:34:21.220073 kubelet[2869]: E0114 01:34:21.219876 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:34:22.603321 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:55036.service - OpenSSH per-connection server daemon (10.0.0.1:55036). Jan 14 01:34:22.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.15:22-10.0.0.1:55036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:34:22.608137 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:22.608252 kernel: audit: type=1130 audit(1768354462.602:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.15:22-10.0.0.1:55036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:22.744000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.746215 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 55036 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:22.749751 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:22.768681 systemd-logind[1583]: New session 13 of user core. 
Jan 14 01:34:22.747000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.770359 kernel: audit: type=1101 audit(1768354462.744:745): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.770421 kernel: audit: type=1103 audit(1768354462.747:746): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.801152 kernel: audit: type=1006 audit(1768354462.747:747): pid=5160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:34:22.747000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe72c0a50 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:22.824009 kernel: audit: type=1300 audit(1768354462.747:747): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe72c0a50 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:22.824276 kernel: audit: type=1327 audit(1768354462.747:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:22.747000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:22.832456 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:34:22.838000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.843000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.896019 kernel: audit: type=1105 audit(1768354462.838:748): pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:22.896215 kernel: audit: type=1103 audit(1768354462.843:749): pid=5164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:23.038565 sshd[5164]: Connection closed by 10.0.0.1 port 55036 Jan 14 01:34:23.039130 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:23.043000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 01:34:23.050669 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:55036.service: Deactivated successfully. Jan 14 01:34:23.059593 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:34:23.067412 systemd-logind[1583]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:34:23.070514 systemd-logind[1583]: Removed session 13. Jan 14 01:34:23.043000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:23.091659 kernel: audit: type=1106 audit(1768354463.043:750): pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:23.091862 kernel: audit: type=1104 audit(1768354463.043:751): pid=5160 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:23.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.15:22-10.0.0.1:55036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:27.106476 kubelet[2869]: E0114 01:34:27.106293 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:34:28.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:35272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:28.063671 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:35272.service - OpenSSH per-connection server daemon (10.0.0.1:35272). Jan 14 01:34:28.071242 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:28.071410 kernel: audit: type=1130 audit(1768354468.063:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:35272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:28.110293 kubelet[2869]: E0114 01:34:28.108337 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:34:28.121364 kubelet[2869]: E0114 01:34:28.121035 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:34:28.208000 audit[5192]: USER_ACCT pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.211326 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 35272 ssh2: RSA 
SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:28.217639 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:28.235034 kernel: audit: type=1101 audit(1768354468.208:754): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.212000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.256037 systemd-logind[1583]: New session 14 of user core. Jan 14 01:34:28.286070 kernel: audit: type=1103 audit(1768354468.212:755): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.286274 kernel: audit: type=1006 audit(1768354468.212:756): pid=5192 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:34:28.212000 audit[5192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8feb0700 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:28.307870 kernel: audit: type=1300 audit(1768354468.212:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8feb0700 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:28.308204 kernel: audit: type=1327 audit(1768354468.212:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:28.212000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:28.319370 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 01:34:28.326000 audit[5192]: USER_START pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.330000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.351716 kernel: audit: type=1105 audit(1768354468.326:757): pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.351753 kernel: audit: type=1103 audit(1768354468.330:758): pid=5196 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.568281 sshd[5196]: Connection closed by 10.0.0.1 port 35272 Jan 14 01:34:28.569216 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:28.575000 audit[5192]: USER_END pid=5192 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.581285 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:35272.service: Deactivated successfully. Jan 14 01:34:28.587113 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:34:28.595568 systemd-logind[1583]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:34:28.598189 systemd-logind[1583]: Removed session 14. Jan 14 01:34:28.575000 audit[5192]: CRED_DISP pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.617182 kernel: audit: type=1106 audit(1768354468.575:759): pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.617293 kernel: audit: type=1104 audit(1768354468.575:760): pid=5192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:28.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.15:22-10.0.0.1:35272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:30.098244 kubelet[2869]: E0114 01:34:30.098111 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:34:30.099519 kubelet[2869]: E0114 01:34:30.099108 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:34:31.101084 kubelet[2869]: E0114 01:34:31.100791 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:34:33.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:35302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:33.587080 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:35302.service - OpenSSH per-connection server daemon (10.0.0.1:35302). 
Jan 14 01:34:33.593146 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:33.593302 kernel: audit: type=1130 audit(1768354473.586:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:35302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:33.767165 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 35302 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:33.765000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.770543 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:33.783122 systemd-logind[1583]: New session 15 of user core. 
Jan 14 01:34:33.791995 kernel: audit: type=1101 audit(1768354473.765:763): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.768000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.823108 kernel: audit: type=1103 audit(1768354473.768:764): pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.823258 kernel: audit: type=1006 audit(1768354473.768:765): pid=5213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:34:33.823304 kernel: audit: type=1300 audit(1768354473.768:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7ad896f0 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:33.768000 audit[5213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7ad896f0 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:33.768000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:33.845599 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:34:33.855024 kernel: audit: type=1327 audit(1768354473.768:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:33.855000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.887083 kernel: audit: type=1105 audit(1768354473.855:766): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.887320 kernel: audit: type=1103 audit(1768354473.859:767): pid=5217 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:33.859000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:34.098423 kubelet[2869]: E0114 01:34:34.098027 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" 
podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:34:34.122989 sshd[5217]: Connection closed by 10.0.0.1 port 35302 Jan 14 01:34:34.126503 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:34.136000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:34.166098 kernel: audit: type=1106 audit(1768354474.136:768): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:34.165000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:34.170249 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:35302.service: Deactivated successfully. Jan 14 01:34:34.182349 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:34:34.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.15:22-10.0.0.1:35302 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:34.188082 kernel: audit: type=1104 audit(1768354474.165:769): pid=5213 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:34.192791 systemd-logind[1583]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:34:34.195434 systemd-logind[1583]: Removed session 15. Jan 14 01:34:39.106056 kubelet[2869]: E0114 01:34:39.105622 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:34:39.146431 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460). Jan 14 01:34:39.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:39.174042 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:39.174274 kernel: audit: type=1130 audit(1768354479.145:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:39.273000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.275066 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:39.278311 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:39.288807 systemd-logind[1583]: New session 16 of user core. Jan 14 01:34:39.275000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.315257 kernel: audit: type=1101 audit(1768354479.273:772): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.315375 kernel: audit: type=1103 audit(1768354479.275:773): pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.315408 kernel: audit: type=1006 audit(1768354479.275:774): pid=5233 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:34:39.317329 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 01:34:39.275000 audit[5233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc74500bd0 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:39.350794 kernel: audit: type=1300 audit(1768354479.275:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc74500bd0 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:39.275000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:39.323000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.390666 kernel: audit: type=1327 audit(1768354479.275:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:39.392078 kernel: audit: type=1105 audit(1768354479.323:775): pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.392225 kernel: audit: type=1103 audit(1768354479.327:776): pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 
01:34:39.327000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.494056 sshd[5237]: Connection closed by 10.0.0.1 port 54460 Jan 14 01:34:39.495329 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:39.507000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.513473 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:54460.service: Deactivated successfully. Jan 14 01:34:39.520133 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:34:39.528650 systemd-logind[1583]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:34:39.531382 systemd-logind[1583]: Removed session 16. 
Jan 14 01:34:39.538139 kernel: audit: type=1106 audit(1768354479.507:777): pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.507000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:39.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.15:22-10.0.0.1:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:39.560134 kernel: audit: type=1104 audit(1768354479.507:778): pid=5233 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:41.105818 kubelet[2869]: E0114 01:34:41.105744 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:34:42.101535 kubelet[2869]: E0114 01:34:42.101473 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:34:42.103187 kubelet[2869]: E0114 01:34:42.101655 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:34:42.103187 kubelet[2869]: E0114 01:34:42.102542 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:34:44.517540 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:39022.service - OpenSSH per-connection server daemon (10.0.0.1:39022). Jan 14 01:34:44.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:44.521589 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:44.521670 kernel: audit: type=1130 audit(1768354484.516:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:44.640000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.659294 kernel: audit: type=1101 audit(1768354484.640:781): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.663297 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 39022 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:44.667372 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:44.664000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.675691 systemd-logind[1583]: New session 17 of user core. Jan 14 01:34:44.693543 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:34:44.699481 kernel: audit: type=1103 audit(1768354484.664:782): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.699573 kernel: audit: type=1006 audit(1768354484.664:783): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:34:44.699616 kernel: audit: type=1300 audit(1768354484.664:783): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13e8bd40 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:44.664000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff13e8bd40 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:44.717181 kernel: audit: type=1327 audit(1768354484.664:783): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:44.664000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:44.705000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.749267 kernel: audit: type=1105 audit(1768354484.705:784): pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.765771 kernel: audit: type=1103 audit(1768354484.710:785): pid=5284 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.710000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.903301 sshd[5284]: Connection closed by 10.0.0.1 port 39022 Jan 14 01:34:44.903701 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:44.905000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.910840 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:39022.service: Deactivated successfully. Jan 14 01:34:44.915623 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:34:44.919853 systemd-logind[1583]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:34:44.922657 systemd-logind[1583]: Removed session 17. 
Jan 14 01:34:44.932067 kernel: audit: type=1106 audit(1768354484.905:786): pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.932150 kernel: audit: type=1104 audit(1768354484.905:787): pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.905000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:44.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.15:22-10.0.0.1:39022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:49.099730 kubelet[2869]: E0114 01:34:49.099223 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:34:49.921478 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:39032.service - OpenSSH per-connection server daemon (10.0.0.1:39032). 
Jan 14 01:34:49.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:39032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:49.926209 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:49.926290 kernel: audit: type=1130 audit(1768354489.920:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:39032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:50.053000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.058156 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 39032 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:50.066271 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:50.082093 kernel: audit: type=1101 audit(1768354490.053:790): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.060000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.103441 systemd-logind[1583]: New session 18 of user core. 
Jan 14 01:34:50.122007 kernel: audit: type=1103 audit(1768354490.060:791): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.127405 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 01:34:50.060000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9e7bef0 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:50.192065 kernel: audit: type=1006 audit(1768354490.060:792): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:34:50.192262 kernel: audit: type=1300 audit(1768354490.060:792): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea9e7bef0 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:50.192305 kernel: audit: type=1327 audit(1768354490.060:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:50.060000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:50.158000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.226120 kernel: audit: type=1105 audit(1768354490.158:793): pid=5299 uid=0 auid=500 ses=18 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.226312 kernel: audit: type=1103 audit(1768354490.172:794): pid=5310 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.172000 audit[5310]: CRED_ACQ pid=5310 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.357475 sshd[5310]: Connection closed by 10.0.0.1 port 39032 Jan 14 01:34:50.358355 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 14 01:34:50.361000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.370759 systemd-logind[1583]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:34:50.371606 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:39032.service: Deactivated successfully. Jan 14 01:34:50.377535 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:34:50.381427 systemd-logind[1583]: Removed session 18. 
Jan 14 01:34:50.362000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.406246 kernel: audit: type=1106 audit(1768354490.361:795): pid=5299 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.406367 kernel: audit: type=1104 audit(1768354490.362:796): pid=5299 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:50.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.15:22-10.0.0.1:39032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:34:52.101872 kubelet[2869]: E0114 01:34:52.101240 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:34:55.103400 kubelet[2869]: E0114 01:34:55.103151 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:34:55.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.15:22-10.0.0.1:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:55.384319 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:34:55.385695 kernel: audit: type=1130 audit(1768354495.378:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.15:22-10.0.0.1:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:34:55.376144 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:45632.service - OpenSSH per-connection server daemon (10.0.0.1:45632). 
Jan 14 01:34:55.521000 audit[5326]: USER_ACCT pid=5326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:55.524300 sshd[5326]: Accepted publickey for core from 10.0.0.1 port 45632 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:34:55.527309 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:34:55.546095 kernel: audit: type=1101 audit(1768354495.521:799): pid=5326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:55.546220 kernel: audit: type=1103 audit(1768354495.524:800): pid=5326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:55.524000 audit[5326]: CRED_ACQ pid=5326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:34:55.540536 systemd-logind[1583]: New session 19 of user core. 
Jan 14 01:34:55.587045 kernel: audit: type=1006 audit(1768354495.524:801): pid=5326 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:34:55.587207 kernel: audit: type=1300 audit(1768354495.524:801): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb73da90 a2=3 a3=0 items=0 ppid=1 pid=5326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:55.524000 audit[5326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb73da90 a2=3 a3=0 items=0 ppid=1 pid=5326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:34:55.524000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:55.596056 kernel: audit: type=1327 audit(1768354495.524:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:34:55.597322 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:34:55.603000 audit[5326]: USER_START pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.627284 kernel: audit: type=1105 audit(1768354495.603:802): pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.626000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.647097 kernel: audit: type=1103 audit(1768354495.626:803): pid=5330 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.825553 sshd[5330]: Connection closed by 10.0.0.1 port 45632
Jan 14 01:34:55.828000 audit[5326]: USER_END pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.826345 sshd-session[5326]: pam_unix(sshd:session): session closed for user core
Jan 14 01:34:55.828000 audit[5326]: CRED_DISP pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.853319 kernel: audit: type=1106 audit(1768354495.828:804): pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.853559 kernel: audit: type=1104 audit(1768354495.828:805): pid=5326 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.880428 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:45632.service: Deactivated successfully.
Jan 14 01:34:55.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.15:22-10.0.0.1:45632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:55.890826 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 01:34:55.893345 systemd-logind[1583]: Session 19 logged out. Waiting for processes to exit.
Jan 14 01:34:55.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.15:22-10.0.0.1:45648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:55.897446 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:45648.service - OpenSSH per-connection server daemon (10.0.0.1:45648).
Jan 14 01:34:55.900523 systemd-logind[1583]: Removed session 19.
Jan 14 01:34:55.993000 audit[5345]: USER_ACCT pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.996050 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 45648 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:34:55.995000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:55.996000 audit[5345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8cb349b0 a2=3 a3=0 items=0 ppid=1 pid=5345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:34:55.996000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:56.000611 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:34:56.018274 systemd-logind[1583]: New session 20 of user core.
Jan 14 01:34:56.026740 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 01:34:56.034000 audit[5345]: USER_START pid=5345 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.038000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.107318 kubelet[2869]: E0114 01:34:56.107118 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1"
Jan 14 01:34:56.299458 sshd[5350]: Connection closed by 10.0.0.1 port 45648
Jan 14 01:34:56.300249 sshd-session[5345]: pam_unix(sshd:session): session closed for user core
Jan 14 01:34:56.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.15:22-10.0.0.1:45662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:56.322457 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:45662.service - OpenSSH per-connection server daemon (10.0.0.1:45662).
Jan 14 01:34:56.323000 audit[5345]: USER_END pid=5345 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.323000 audit[5345]: CRED_DISP pid=5345 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.334548 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:45648.service: Deactivated successfully.
Jan 14 01:34:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.15:22-10.0.0.1:45648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:56.341478 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 01:34:56.350199 systemd-logind[1583]: Session 20 logged out. Waiting for processes to exit.
Jan 14 01:34:56.360103 systemd-logind[1583]: Removed session 20.
Jan 14 01:34:56.493000 audit[5358]: USER_ACCT pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.495265 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 45662 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:34:56.504050 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:34:56.501000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.501000 audit[5358]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4b3f8000 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:34:56.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:34:56.522692 systemd-logind[1583]: New session 21 of user core.
Jan 14 01:34:56.531589 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 01:34:56.542000 audit[5358]: USER_START pid=5358 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.546000 audit[5365]: CRED_ACQ pid=5365 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.714855 sshd[5365]: Connection closed by 10.0.0.1 port 45662
Jan 14 01:34:56.715344 sshd-session[5358]: pam_unix(sshd:session): session closed for user core
Jan 14 01:34:56.717000 audit[5358]: USER_END pid=5358 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.717000 audit[5358]: CRED_DISP pid=5358 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:34:56.723649 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:45662.service: Deactivated successfully.
Jan 14 01:34:56.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.15:22-10.0.0.1:45662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:34:56.727681 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 01:34:56.732055 systemd-logind[1583]: Session 21 logged out. Waiting for processes to exit.
Jan 14 01:34:56.735411 systemd-logind[1583]: Removed session 21.
Jan 14 01:34:57.103476 kubelet[2869]: E0114 01:34:57.103386 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25"
Jan 14 01:34:57.106451 kubelet[2869]: E0114 01:34:57.106332 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e"
Jan 14 01:35:01.095456 systemd[1703]: Created slice background.slice - User Background Tasks Slice.
Jan 14 01:35:01.105732 kubelet[2869]: E0114 01:35:01.102883 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7"
Jan 14 01:35:01.107092 systemd[1703]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories...
Jan 14 01:35:01.150625 systemd[1703]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories.
Jan 14 01:35:01.737572 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:45702.service - OpenSSH per-connection server daemon (10.0.0.1:45702).
Jan 14 01:35:01.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:45702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:01.762645 kernel: kauditd_printk_skb: 23 callbacks suppressed
Jan 14 01:35:01.762780 kernel: audit: type=1130 audit(1768354501.737:825): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:45702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:01.848000 audit[5382]: USER_ACCT pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.873251 kernel: audit: type=1101 audit(1768354501.848:826): pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.873302 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:35:01.853473 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:35:01.869634 systemd-logind[1583]: New session 22 of user core.
Jan 14 01:35:01.850000 audit[5382]: CRED_ACQ pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.896159 kernel: audit: type=1103 audit(1768354501.850:827): pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.897610 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 01:35:01.912101 kernel: audit: type=1006 audit(1768354501.851:828): pid=5382 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Jan 14 01:35:01.851000 audit[5382]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29e4d5f0 a2=3 a3=0 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:35:01.851000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:35:01.948553 kernel: audit: type=1300 audit(1768354501.851:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff29e4d5f0 a2=3 a3=0 items=0 ppid=1 pid=5382 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:35:01.950327 kernel: audit: type=1327 audit(1768354501.851:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:35:01.950389 kernel: audit: type=1105 audit(1768354501.915:829): pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.915000 audit[5382]: USER_START pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.919000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:01.986611 kernel: audit: type=1103 audit(1768354501.919:830): pid=5386 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:02.103382 kubelet[2869]: E0114 01:35:02.103127 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:35:02.114112 sshd[5386]: Connection closed by 10.0.0.1 port 45702
Jan 14 01:35:02.114363 sshd-session[5382]: pam_unix(sshd:session): session closed for user core
Jan 14 01:35:02.117000 audit[5382]: USER_END pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:02.122567 systemd-logind[1583]: Session 22 logged out. Waiting for processes to exit.
Jan 14 01:35:02.122867 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:45702.service: Deactivated successfully.
Jan 14 01:35:02.126815 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 01:35:02.132514 systemd-logind[1583]: Removed session 22.
Jan 14 01:35:02.118000 audit[5382]: CRED_DISP pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:02.164002 kernel: audit: type=1106 audit(1768354502.117:831): pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:02.164194 kernel: audit: type=1104 audit(1768354502.118:832): pid=5382 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:02.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.15:22-10.0.0.1:45702 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:05.099648 kubelet[2869]: E0114 01:35:05.099449 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967"
Jan 14 01:35:06.102649 kubelet[2869]: E0114 01:35:06.102405 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27"
Jan 14 01:35:07.138421 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586).
Jan 14 01:35:07.148876 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:35:07.149173 kernel: audit: type=1130 audit(1768354507.137:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:07.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:07.238000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.243721 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:35:07.258723 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:35:07.241000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.262059 systemd-logind[1583]: New session 23 of user core.
Jan 14 01:35:07.282254 kernel: audit: type=1101 audit(1768354507.238:835): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.282422 kernel: audit: type=1103 audit(1768354507.241:836): pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.282600 kernel: audit: type=1006 audit(1768354507.241:837): pid=5400 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Jan 14 01:35:07.241000 audit[5400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ccb2c30 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:35:07.318005 kernel: audit: type=1300 audit(1768354507.241:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5ccb2c30 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:35:07.241000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:35:07.327067 kernel: audit: type=1327 audit(1768354507.241:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:35:07.329273 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 01:35:07.338000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.342000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.385846 kernel: audit: type=1105 audit(1768354507.338:838): pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.386436 kernel: audit: type=1103 audit(1768354507.342:839): pid=5404 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.514491 sshd[5404]: Connection closed by 10.0.0.1 port 55586
Jan 14 01:35:07.514845 sshd-session[5400]: pam_unix(sshd:session): session closed for user core
Jan 14 01:35:07.515000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.522810 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:55586.service: Deactivated successfully.
Jan 14 01:35:07.526810 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 01:35:07.532859 systemd-logind[1583]: Session 23 logged out. Waiting for processes to exit.
Jan 14 01:35:07.535736 systemd-logind[1583]: Removed session 23.
Jan 14 01:35:07.516000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.563052 kernel: audit: type=1106 audit(1768354507.515:840): pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.563204 kernel: audit: type=1104 audit(1768354507.516:841): pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:07.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.15:22-10.0.0.1:55586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:09.107368 kubelet[2869]: E0114 01:35:09.106760 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1"
Jan 14 01:35:10.097810 kubelet[2869]: E0114 01:35:10.097627 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:35:11.105626 kubelet[2869]: E0114 01:35:11.105472 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e"
Jan 14 01:35:12.098071 kubelet[2869]: E0114 01:35:12.097793 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25"
Jan 14 01:35:12.099431 kubelet[2869]: E0114 01:35:12.098049 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7"
Jan 14 01:35:12.541072 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:55604.service - OpenSSH per-connection server daemon (10.0.0.1:55604).
Jan 14 01:35:12.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:55604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:12.546267 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:35:12.548479 kernel: audit: type=1130 audit(1768354512.540:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:55604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:35:12.647000 audit[5417]: USER_ACCT pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:35:12.649057 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 55604 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M
Jan 14 01:35:12.652395 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:35:12.666047 systemd-logind[1583]: New session 24 of user core.
Jan 14 01:35:12.671048 kernel: audit: type=1101 audit(1768354512.647:844): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.671188 kernel: audit: type=1103 audit(1768354512.649:845): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.649000 audit[5417]: CRED_ACQ pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.699344 kernel: audit: type=1006 audit(1768354512.649:846): pid=5417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:35:12.699497 kernel: audit: type=1300 audit(1768354512.649:846): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4a74bc90 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:12.649000 audit[5417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4a74bc90 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:12.700627 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:35:12.649000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:12.731572 kernel: audit: type=1327 audit(1768354512.649:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:12.732466 kernel: audit: type=1105 audit(1768354512.711:847): pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.711000 audit[5417]: USER_START pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.715000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.771100 kernel: audit: type=1103 audit(1768354512.715:848): pid=5421 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.923876 sshd[5421]: Connection closed by 10.0.0.1 port 55604 Jan 14 01:35:12.924603 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:12.926000 audit[5417]: USER_END pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.936786 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:55604.service: Deactivated successfully. Jan 14 01:35:12.938323 systemd-logind[1583]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:35:12.947699 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:35:12.927000 audit[5417]: CRED_DISP pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.966227 systemd-logind[1583]: Removed session 24. Jan 14 01:35:12.973492 kernel: audit: type=1106 audit(1768354512.926:849): pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.973570 kernel: audit: type=1104 audit(1768354512.927:850): pid=5417 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:12.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.15:22-10.0.0.1:55604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:14.101108 kubelet[2869]: E0114 01:35:14.098493 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:17.103234 kubelet[2869]: E0114 01:35:17.101499 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:35:17.945819 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:45518.service - OpenSSH per-connection server daemon (10.0.0.1:45518). Jan 14 01:35:17.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:45518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:17.953087 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:35:17.953519 kernel: audit: type=1130 audit(1768354517.944:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:45518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:18.088000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.093357 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 45518 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:18.095811 kubelet[2869]: E0114 01:35:18.095710 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:18.098420 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:18.107644 kubelet[2869]: E0114 01:35:18.107501 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:35:18.093000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.120560 systemd-logind[1583]: New session 25 of user core. 
Jan 14 01:35:18.137020 kernel: audit: type=1101 audit(1768354518.088:853): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.137107 kernel: audit: type=1103 audit(1768354518.093:854): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.149457 kernel: audit: type=1006 audit(1768354518.093:855): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:35:18.149859 kernel: audit: type=1300 audit(1768354518.093:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77c827f0 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:18.093000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd77c827f0 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:18.093000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:18.181383 kernel: audit: type=1327 audit(1768354518.093:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:18.189443 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:35:18.200000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.216000 audit[5470]: CRED_ACQ pid=5470 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.243077 kernel: audit: type=1105 audit(1768354518.200:856): pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.243268 kernel: audit: type=1103 audit(1768354518.216:857): pid=5470 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.383660 sshd[5470]: Connection closed by 10.0.0.1 port 45518 Jan 14 01:35:18.384154 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:18.385000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.389103 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:45518.service: Deactivated successfully. 
Jan 14 01:35:18.391829 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:35:18.396635 systemd-logind[1583]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:35:18.401004 systemd-logind[1583]: Removed session 25. Jan 14 01:35:18.385000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.423041 kernel: audit: type=1106 audit(1768354518.385:858): pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.423157 kernel: audit: type=1104 audit(1768354518.385:859): pid=5466 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:18.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.15:22-10.0.0.1:45518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:19.097419 kubelet[2869]: E0114 01:35:19.097267 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:20.100834 kubelet[2869]: E0114 01:35:20.100358 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:35:22.095794 kubelet[2869]: E0114 01:35:22.095639 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:23.103848 kubelet[2869]: E0114 01:35:23.103672 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:35:23.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.15:22-10.0.0.1:45524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:23.409083 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:45524.service - OpenSSH per-connection server daemon (10.0.0.1:45524). Jan 14 01:35:23.434066 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:35:23.434230 kernel: audit: type=1130 audit(1768354523.409:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.15:22-10.0.0.1:45524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:23.532000 audit[5490]: USER_ACCT pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.538297 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:23.543854 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 45524 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:23.549472 systemd-logind[1583]: New session 26 of user core. 
Jan 14 01:35:23.535000 audit[5490]: CRED_ACQ pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.581997 kernel: audit: type=1101 audit(1768354523.532:862): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.582194 kernel: audit: type=1103 audit(1768354523.535:863): pid=5490 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.582241 kernel: audit: type=1006 audit(1768354523.535:864): pid=5490 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:35:23.594488 kernel: audit: type=1300 audit(1768354523.535:864): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5677180 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:23.535000 audit[5490]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb5677180 a2=3 a3=0 items=0 ppid=1 pid=5490 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:23.596284 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:35:23.535000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:23.626608 kernel: audit: type=1327 audit(1768354523.535:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:23.609000 audit[5490]: USER_START pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.650016 kernel: audit: type=1105 audit(1768354523.609:865): pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.650145 kernel: audit: type=1103 audit(1768354523.616:866): pid=5494 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.616000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.860330 sshd[5494]: Connection closed by 10.0.0.1 port 45524 Jan 14 01:35:23.860749 sshd-session[5490]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:23.862000 audit[5490]: USER_END pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.868305 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:45524.service: Deactivated successfully. Jan 14 01:35:23.878027 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:35:23.884774 systemd-logind[1583]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:35:23.887117 systemd-logind[1583]: Removed session 26. Jan 14 01:35:23.862000 audit[5490]: CRED_DISP pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.906207 kernel: audit: type=1106 audit(1768354523.862:867): pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.906361 kernel: audit: type=1104 audit(1768354523.862:868): pid=5490 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:23.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.15:22-10.0.0.1:45524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:26.101811 containerd[1601]: time="2026-01-14T01:35:26.101608504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:35:26.103601 kubelet[2869]: E0114 01:35:26.102138 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:35:26.196288 containerd[1601]: time="2026-01-14T01:35:26.196074414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:26.197813 containerd[1601]: time="2026-01-14T01:35:26.197617543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:35:26.197813 containerd[1601]: time="2026-01-14T01:35:26.197719233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:26.198118 kubelet[2869]: E0114 01:35:26.198047 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:35:26.198118 kubelet[2869]: E0114 01:35:26.198105 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:35:26.198605 kubelet[2869]: E0114 01:35:26.198548 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47q8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-q67mc_calico-apiserver(ff2a83bd-ca30-4810-bc00-617909aaca25): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:26.199987 kubelet[2869]: E0114 01:35:26.199867 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:35:27.097184 kubelet[2869]: E0114 01:35:27.097033 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:27.299768 update_engine[1586]: I20260114 01:35:27.299235 1586 
prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 01:35:27.299768 update_engine[1586]: I20260114 01:35:27.299371 1586 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 01:35:27.306957 update_engine[1586]: I20260114 01:35:27.304244 1586 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 01:35:27.307692 update_engine[1586]: I20260114 01:35:27.307669 1586 omaha_request_params.cc:62] Current group set to developer Jan 14 01:35:27.308103 update_engine[1586]: I20260114 01:35:27.308080 1586 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308203 1586 update_attempter.cc:643] Scheduling an action processor start. Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308256 1586 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308462 1586 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308606 1586 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308619 1586 omaha_request_action.cc:272] Request: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: Jan 14 01:35:27.309955 update_engine[1586]: I20260114 01:35:27.308645 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:35:27.318645 update_engine[1586]: I20260114 01:35:27.317306 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 
01:35:27.320793 update_engine[1586]: I20260114 01:35:27.319621 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 01:35:27.328157 locksmithd[1635]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 01:35:27.337251 update_engine[1586]: E20260114 01:35:27.336823 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 01:35:27.337251 update_engine[1586]: I20260114 01:35:27.337085 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 01:35:28.880753 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772). Jan 14 01:35:28.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.15:22-10.0.0.1:37772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:28.883264 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:35:28.883337 kernel: audit: type=1130 audit(1768354528.880:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.15:22-10.0.0.1:37772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:28.956000 audit[5514]: USER_ACCT pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:28.960661 sshd-session[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:28.964376 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:28.967731 systemd-logind[1583]: New session 27 of user core. Jan 14 01:35:28.958000 audit[5514]: CRED_ACQ pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:28.981061 kernel: audit: type=1101 audit(1768354528.956:871): pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:28.981192 kernel: audit: type=1103 audit(1768354528.958:872): pid=5514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:28.981236 kernel: audit: type=1006 audit(1768354528.958:873): pid=5514 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:35:28.958000 audit[5514]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe23469350 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:29.006987 kernel: audit: type=1300 audit(1768354528.958:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe23469350 a2=3 a3=0 items=0 ppid=1 pid=5514 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:29.007150 kernel: audit: type=1327 audit(1768354528.958:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:28.958000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:29.015366 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 01:35:29.019000 audit[5514]: USER_START pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.036148 kernel: audit: type=1105 audit(1768354529.019:874): pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.036230 kernel: audit: type=1103 audit(1768354529.022:875): pid=5520 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.022000 audit[5520]: CRED_ACQ pid=5520 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.101646 kubelet[2869]: E0114 01:35:29.101583 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:35:29.148755 sshd[5520]: Connection closed by 10.0.0.1 port 37772 Jan 14 01:35:29.149070 sshd-session[5514]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:29.150000 audit[5514]: USER_END pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.154878 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:37772.service: Deactivated successfully. Jan 14 01:35:29.159342 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:35:29.162803 systemd-logind[1583]: Session 27 logged out. Waiting for processes to exit. Jan 14 01:35:29.150000 audit[5514]: CRED_DISP pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.165227 systemd-logind[1583]: Removed session 27. 
Jan 14 01:35:29.172110 kernel: audit: type=1106 audit(1768354529.150:876): pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.172601 kernel: audit: type=1104 audit(1768354529.150:877): pid=5514 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:29.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.15:22-10.0.0.1:37772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:30.096963 kubelet[2869]: E0114 01:35:30.096786 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:35:34.098666 containerd[1601]: time="2026-01-14T01:35:34.098473842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:35:34.166293 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:37790.service - OpenSSH per-connection server daemon (10.0.0.1:37790). 
Jan 14 01:35:34.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.15:22-10.0.0.1:37790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:34.168584 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:35:34.168636 kernel: audit: type=1130 audit(1768354534.165:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.15:22-10.0.0.1:37790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:34.176524 containerd[1601]: time="2026-01-14T01:35:34.176355269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:34.181321 containerd[1601]: time="2026-01-14T01:35:34.178467599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:35:34.181321 containerd[1601]: time="2026-01-14T01:35:34.181056749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:34.183083 kubelet[2869]: E0114 01:35:34.183048 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:35:34.183962 kubelet[2869]: E0114 01:35:34.183522 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:35:34.183962 kubelet[2869]: E0114 01:35:34.183790 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e5d51c992c994bfdbf53b4556ecb9a0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:34.189216 containerd[1601]: 
time="2026-01-14T01:35:34.189113639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:35:34.261822 containerd[1601]: time="2026-01-14T01:35:34.261757425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:34.264143 containerd[1601]: time="2026-01-14T01:35:34.264010598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:35:34.264143 containerd[1601]: time="2026-01-14T01:35:34.264117938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:34.264779 kubelet[2869]: E0114 01:35:34.264525 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:35:34.265050 kubelet[2869]: E0114 01:35:34.264982 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:35:34.265779 kubelet[2869]: E0114 01:35:34.265287 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9cx6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-b769697d-jcx4g_calico-system(1f7ed930-9020-4e7b-a11b-c469857f7fe1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:34.267246 kubelet[2869]: E0114 01:35:34.267142 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:35:34.266000 audit[5535]: USER_ACCT pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.272414 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 37790 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:34.276126 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:34.282976 kernel: audit: type=1101 audit(1768354534.266:880): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.273000 audit[5535]: CRED_ACQ pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.283352 systemd-logind[1583]: New session 28 of user core. Jan 14 01:35:34.301457 kernel: audit: type=1103 audit(1768354534.273:881): pid=5535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.301592 kernel: audit: type=1006 audit(1768354534.273:882): pid=5535 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 01:35:34.273000 audit[5535]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8f08aa10 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:34.314308 kernel: audit: type=1300 audit(1768354534.273:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe8f08aa10 a2=3 a3=0 items=0 ppid=1 pid=5535 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:34.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:34.315408 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 01:35:34.319219 kernel: audit: type=1327 audit(1768354534.273:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:34.320000 audit[5535]: USER_START pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.324000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.342510 kernel: audit: type=1105 audit(1768354534.320:883): pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.342625 kernel: audit: type=1103 audit(1768354534.324:884): pid=5539 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.482118 sshd[5539]: Connection closed by 10.0.0.1 port 37790 Jan 14 01:35:34.482804 sshd-session[5535]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:34.485000 audit[5535]: USER_END pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 14 01:35:34.485000 audit[5535]: CRED_DISP pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.506155 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:37790.service: Deactivated successfully. Jan 14 01:35:34.507235 kernel: audit: type=1106 audit(1768354534.485:885): pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.509059 kernel: audit: type=1104 audit(1768354534.485:886): pid=5535 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.15:22-10.0.0.1:37790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:34.512334 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 01:35:34.515469 systemd-logind[1583]: Session 28 logged out. Waiting for processes to exit. Jan 14 01:35:34.520455 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:38460.service - OpenSSH per-connection server daemon (10.0.0.1:38460). Jan 14 01:35:34.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.15:22-10.0.0.1:38460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:34.521974 systemd-logind[1583]: Removed session 28. 
Jan 14 01:35:34.588000 audit[5553]: USER_ACCT pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.589705 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 38460 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:34.590000 audit[5553]: CRED_ACQ pid=5553 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.590000 audit[5553]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6c786a80 a2=3 a3=0 items=0 ppid=1 pid=5553 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:34.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:34.594027 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:34.605801 systemd-logind[1583]: New session 29 of user core. Jan 14 01:35:34.614886 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 14 01:35:34.617000 audit[5553]: USER_START pid=5553 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:34.621000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.072495 sshd[5557]: Connection closed by 10.0.0.1 port 38460 Jan 14 01:35:35.073840 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:35.075000 audit[5553]: USER_END pid=5553 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.075000 audit[5553]: CRED_DISP pid=5553 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.086083 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:38460.service: Deactivated successfully. Jan 14 01:35:35.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.15:22-10.0.0.1:38460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:35.090670 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 01:35:35.094983 systemd-logind[1583]: Session 29 logged out. Waiting for processes to exit. 
Jan 14 01:35:35.101651 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:38468.service - OpenSSH per-connection server daemon (10.0.0.1:38468). Jan 14 01:35:35.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.15:22-10.0.0.1:38468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:35.103993 systemd-logind[1583]: Removed session 29. Jan 14 01:35:35.212000 audit[5570]: USER_ACCT pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.213987 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 38468 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:35.214000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.216879 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:35.214000 audit[5570]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddee14b80 a2=3 a3=0 items=0 ppid=1 pid=5570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:35.214000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:35.225139 systemd-logind[1583]: New session 30 of user core. Jan 14 01:35:35.244408 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 14 01:35:35.249000 audit[5570]: USER_START pid=5570 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:35.252000 audit[5574]: CRED_ACQ pid=5574 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.086658 sshd[5574]: Connection closed by 10.0.0.1 port 38468 Jan 14 01:35:36.090963 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:36.093000 audit[5590]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:36.093000 audit[5590]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcdb7608e0 a2=0 a3=7ffcdb7608cc items=0 ppid=3030 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.093000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:36.096000 audit[5570]: USER_END pid=5570 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.096000 audit[5570]: CRED_DISP pid=5570 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.102000 audit[5590]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:36.102000 audit[5590]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcdb7608e0 a2=0 a3=0 items=0 ppid=3030 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:36.105388 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:38472.service - OpenSSH per-connection server daemon (10.0.0.1:38472). Jan 14 01:35:36.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.15:22-10.0.0.1:38472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:36.106522 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:38468.service: Deactivated successfully. Jan 14 01:35:36.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.15:22-10.0.0.1:38468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:36.109882 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 01:35:36.116244 systemd-logind[1583]: Session 30 logged out. Waiting for processes to exit. Jan 14 01:35:36.121779 systemd-logind[1583]: Removed session 30. 
Jan 14 01:35:36.157000 audit[5598]: NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:36.157000 audit[5598]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd88bf1e00 a2=0 a3=7ffd88bf1dec items=0 ppid=3030 pid=5598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:36.164000 audit[5598]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:36.164000 audit[5598]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd88bf1e00 a2=0 a3=0 items=0 ppid=3030 pid=5598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:36.201000 audit[5592]: USER_ACCT pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.204333 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 38472 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:36.205000 audit[5592]: CRED_ACQ pid=5592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.205000 audit[5592]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8b526070 a2=3 a3=0 items=0 ppid=1 pid=5592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:36.209121 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:36.216507 systemd-logind[1583]: New session 31 of user core. Jan 14 01:35:36.225338 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 14 01:35:36.229000 audit[5592]: USER_START pid=5592 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.231000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.595424 sshd[5601]: Connection closed by 10.0.0.1 port 38472 Jan 14 01:35:36.595857 sshd-session[5592]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:36.601000 audit[5592]: USER_END pid=5592 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 
01:35:36.602000 audit[5592]: CRED_DISP pid=5592 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.610443 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:38472.service: Deactivated successfully. Jan 14 01:35:36.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.15:22-10.0.0.1:38472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:36.613773 systemd[1]: session-31.scope: Deactivated successfully. Jan 14 01:35:36.620278 systemd-logind[1583]: Session 31 logged out. Waiting for processes to exit. Jan 14 01:35:36.629600 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:38488.service - OpenSSH per-connection server daemon (10.0.0.1:38488). Jan 14 01:35:36.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.15:22-10.0.0.1:38488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:36.633856 systemd-logind[1583]: Removed session 31. 
Jan 14 01:35:36.746000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.747349 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 38488 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:36.747000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.747000 audit[5614]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc615ee610 a2=3 a3=0 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:36.747000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:36.750250 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:36.759276 systemd-logind[1583]: New session 32 of user core. Jan 14 01:35:36.767666 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 14 01:35:36.773000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.776000 audit[5618]: CRED_ACQ pid=5618 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.890610 sshd[5618]: Connection closed by 10.0.0.1 port 38488 Jan 14 01:35:36.890996 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:36.891000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.891000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:36.898197 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:38488.service: Deactivated successfully. Jan 14 01:35:36.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.15:22-10.0.0.1:38488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:36.901424 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 01:35:36.905543 systemd-logind[1583]: Session 32 logged out. Waiting for processes to exit. 
Jan 14 01:35:36.908619 systemd-logind[1583]: Removed session 32. Jan 14 01:35:37.098306 kubelet[2869]: E0114 01:35:37.097779 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:35:37.245026 update_engine[1586]: I20260114 01:35:37.244041 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:35:37.245026 update_engine[1586]: I20260114 01:35:37.244171 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:35:37.245026 update_engine[1586]: I20260114 01:35:37.244801 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 01:35:37.261727 update_engine[1586]: E20260114 01:35:37.261615 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 01:35:37.261876 update_engine[1586]: I20260114 01:35:37.261742 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 01:35:38.097778 containerd[1601]: time="2026-01-14T01:35:38.097685499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:35:38.172816 containerd[1601]: time="2026-01-14T01:35:38.172684181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:38.174546 containerd[1601]: time="2026-01-14T01:35:38.174378067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:35:38.174546 containerd[1601]: time="2026-01-14T01:35:38.174416422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:38.174853 kubelet[2869]: E0114 01:35:38.174698 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:35:38.174853 kubelet[2869]: E0114 01:35:38.174755 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:35:38.175451 kubelet[2869]: E0114 01:35:38.175021 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:38.177463 containerd[1601]: time="2026-01-14T01:35:38.177424152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:35:38.245960 containerd[1601]: time="2026-01-14T01:35:38.245641340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:38.247522 containerd[1601]: time="2026-01-14T01:35:38.247395038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:35:38.247522 containerd[1601]: time="2026-01-14T01:35:38.247491849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:38.248172 kubelet[2869]: E0114 01:35:38.248062 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:35:38.248172 kubelet[2869]: E0114 01:35:38.248145 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:35:38.248373 kubelet[2869]: E0114 01:35:38.248295 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7hrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9jt56_calico-system(a92d2670-8bc7-4318-8d73-b12be2d0a45e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:38.250610 kubelet[2869]: E0114 01:35:38.249801 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:35:40.097269 kubelet[2869]: E0114 01:35:40.097187 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:35:41.908779 systemd[1]: Started sshd@31-10.0.0.15:22-10.0.0.1:38518.service - OpenSSH per-connection server daemon (10.0.0.1:38518). Jan 14 01:35:41.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.15:22-10.0.0.1:38518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:41.911578 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 01:35:41.911716 kernel: audit: type=1130 audit(1768354541.908:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.15:22-10.0.0.1:38518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:42.015000 audit[5631]: USER_ACCT pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.020366 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:42.023587 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 38518 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:42.017000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.029545 systemd-logind[1583]: New session 33 of user core. 
Jan 14 01:35:42.036776 kernel: audit: type=1101 audit(1768354542.015:929): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.036846 kernel: audit: type=1103 audit(1768354542.017:930): pid=5631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.036884 kernel: audit: type=1006 audit(1768354542.017:931): pid=5631 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 14 01:35:42.017000 audit[5631]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6815dd90 a2=3 a3=0 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:42.062216 kernel: audit: type=1300 audit(1768354542.017:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6815dd90 a2=3 a3=0 items=0 ppid=1 pid=5631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:42.062833 kernel: audit: type=1327 audit(1768354542.017:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:42.017000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:42.072879 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 14 01:35:42.081000 audit[5631]: USER_START pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.098290 containerd[1601]: time="2026-01-14T01:35:42.098250403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:35:42.085000 audit[5635]: CRED_ACQ pid=5635 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.111971 kernel: audit: type=1105 audit(1768354542.081:932): pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.112142 kernel: audit: type=1103 audit(1768354542.085:933): pid=5635 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.174962 containerd[1601]: time="2026-01-14T01:35:42.174050100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:42.177074 containerd[1601]: time="2026-01-14T01:35:42.176882715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:35:42.177413 containerd[1601]: time="2026-01-14T01:35:42.177190598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:42.178411 kubelet[2869]: E0114 01:35:42.178087 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:35:42.178411 kubelet[2869]: E0114 01:35:42.178155 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:35:42.178411 kubelet[2869]: E0114 01:35:42.178339 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-546579f487-48d5w_calico-system(35648de2-563a-403b-bdd1-f0409de12a27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:42.180423 kubelet[2869]: E0114 01:35:42.180148 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-546579f487-48d5w" podUID="35648de2-563a-403b-bdd1-f0409de12a27" Jan 14 01:35:42.230607 sshd[5635]: Connection closed by 10.0.0.1 port 38518 Jan 14 01:35:42.231146 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:42.232000 audit[5631]: USER_END pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.238236 systemd[1]: sshd@31-10.0.0.15:22-10.0.0.1:38518.service: Deactivated successfully. Jan 14 01:35:42.240219 systemd-logind[1583]: Session 33 logged out. Waiting for processes to exit. Jan 14 01:35:42.242875 systemd[1]: session-33.scope: Deactivated successfully. Jan 14 01:35:42.244973 kernel: audit: type=1106 audit(1768354542.232:934): pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.232000 audit[5631]: CRED_DISP pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:42.247771 systemd-logind[1583]: Removed session 33. Jan 14 01:35:42.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.15:22-10.0.0.1:38518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:42.259136 kernel: audit: type=1104 audit(1768354542.232:935): pid=5631 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:44.096628 kubelet[2869]: E0114 01:35:44.096008 2869 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:35:44.101113 containerd[1601]: time="2026-01-14T01:35:44.101032459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:35:44.166016 containerd[1601]: time="2026-01-14T01:35:44.164946235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:44.167336 containerd[1601]: time="2026-01-14T01:35:44.167196887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:35:44.167710 containerd[1601]: time="2026-01-14T01:35:44.167431205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:44.168297 kubelet[2869]: E0114 01:35:44.168174 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:35:44.168297 kubelet[2869]: E0114 01:35:44.168273 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:35:44.168570 kubelet[2869]: E0114 01:35:44.168449 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77c46b477-wkc27_calico-apiserver(c32ecf43-33bb-4f07-8af2-75af73cd7967): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:44.170067 kubelet[2869]: E0114 01:35:44.170010 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-wkc27" podUID="c32ecf43-33bb-4f07-8af2-75af73cd7967" Jan 14 01:35:44.605000 audit[5676]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:44.605000 audit[5676]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1b0f17d0 a2=0 a3=7ffc1b0f17bc items=0 
ppid=3030 pid=5676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:44.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:44.623000 audit[5676]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:35:44.623000 audit[5676]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc1b0f17d0 a2=0 a3=7ffc1b0f17bc items=0 ppid=3030 pid=5676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:44.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:35:47.249100 update_engine[1586]: I20260114 01:35:47.248970 1586 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:35:47.249100 update_engine[1586]: I20260114 01:35:47.249068 1586 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:35:47.249798 update_engine[1586]: I20260114 01:35:47.249745 1586 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 01:35:47.253837 systemd[1]: Started sshd@32-10.0.0.15:22-10.0.0.1:53584.service - OpenSSH per-connection server daemon (10.0.0.1:53584). Jan 14 01:35:47.263040 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:35:47.263162 kernel: audit: type=1130 audit(1768354547.253:939): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.15:22-10.0.0.1:53584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:47.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.15:22-10.0.0.1:53584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:47.268976 update_engine[1586]: E20260114 01:35:47.268780 1586 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 01:35:47.268976 update_engine[1586]: I20260114 01:35:47.268937 1586 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 01:35:47.345876 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 53584 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:47.344000 audit[5679]: USER_ACCT pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.349984 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:47.347000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.369129 systemd-logind[1583]: New session 34 of user core. 
Jan 14 01:35:47.377331 kernel: audit: type=1101 audit(1768354547.344:940): pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.377440 kernel: audit: type=1103 audit(1768354547.347:941): pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.383977 kernel: audit: type=1006 audit(1768354547.347:942): pid=5679 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 14 01:35:47.347000 audit[5679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd681fcd30 a2=3 a3=0 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:47.408377 kernel: audit: type=1300 audit(1768354547.347:942): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd681fcd30 a2=3 a3=0 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:47.408548 kernel: audit: type=1327 audit(1768354547.347:942): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:47.347000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:47.409574 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 14 01:35:47.435042 kernel: audit: type=1105 audit(1768354547.418:943): pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.418000 audit[5679]: USER_START pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.421000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.446161 kernel: audit: type=1103 audit(1768354547.421:944): pid=5683 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.552243 sshd[5683]: Connection closed by 10.0.0.1 port 53584 Jan 14 01:35:47.554108 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:47.557000 audit[5679]: USER_END pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.565753 systemd[1]: sshd@32-10.0.0.15:22-10.0.0.1:53584.service: Deactivated successfully. 
Jan 14 01:35:47.566972 systemd-logind[1583]: Session 34 logged out. Waiting for processes to exit. Jan 14 01:35:47.570281 systemd[1]: session-34.scope: Deactivated successfully. Jan 14 01:35:47.574602 systemd-logind[1583]: Removed session 34. Jan 14 01:35:47.576200 kernel: audit: type=1106 audit(1768354547.557:945): pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.576354 kernel: audit: type=1104 audit(1768354547.558:946): pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.558000 audit[5679]: CRED_DISP pid=5679 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:47.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.15:22-10.0.0.1:53584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:49.105102 kubelet[2869]: E0114 01:35:49.104575 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-b769697d-jcx4g" podUID="1f7ed930-9020-4e7b-a11b-c469857f7fe1" Jan 14 01:35:51.097962 kubelet[2869]: E0114 01:35:51.097856 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77c46b477-q67mc" podUID="ff2a83bd-ca30-4810-bc00-617909aaca25" Jan 14 01:35:51.102174 containerd[1601]: time="2026-01-14T01:35:51.101176024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:35:51.179942 containerd[1601]: time="2026-01-14T01:35:51.179826669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:35:51.182151 containerd[1601]: time="2026-01-14T01:35:51.181966837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:35:51.182151 containerd[1601]: time="2026-01-14T01:35:51.182060892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:35:51.182425 kubelet[2869]: E0114 01:35:51.182385 2869 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:35:51.182481 kubelet[2869]: E0114 01:35:51.182441 2869 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:35:51.182976 kubelet[2869]: E0114 01:35:51.182584 2869 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x25n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5vwfg_calico-system(1821a0db-e895-49f0-8081-ae8dd6cf61e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:35:51.184407 kubelet[2869]: E0114 01:35:51.184257 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5vwfg" podUID="1821a0db-e895-49f0-8081-ae8dd6cf61e7" Jan 14 01:35:52.099443 kubelet[2869]: E0114 01:35:52.099338 2869 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9jt56" podUID="a92d2670-8bc7-4318-8d73-b12be2d0a45e" Jan 14 01:35:52.636331 systemd[1]: Started sshd@33-10.0.0.15:22-10.0.0.1:53606.service - OpenSSH per-connection server daemon (10.0.0.1:53606). Jan 14 01:35:52.647978 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:35:52.648307 kernel: audit: type=1130 audit(1768354552.635:948): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.15:22-10.0.0.1:53606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:35:52.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.15:22-10.0.0.1:53606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:35:52.729824 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 53606 ssh2: RSA SHA256:O2LeM+teVAk+oeuoUBUuLpTXsaYBDCp4nV9wIZaPA9M Jan 14 01:35:52.728000 audit[5696]: USER_ACCT pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.733083 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:35:52.743150 systemd-logind[1583]: New session 35 of user core. Jan 14 01:35:52.730000 audit[5696]: CRED_ACQ pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.770504 kernel: audit: type=1101 audit(1768354552.728:949): pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.770657 kernel: audit: type=1103 audit(1768354552.730:950): pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.730000 audit[5696]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd725295f0 a2=3 a3=0 items=0 ppid=1 pid=5696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:52.779780 systemd[1]: Started session-35.scope - Session 35 of User 
core. Jan 14 01:35:52.793252 kernel: audit: type=1006 audit(1768354552.730:951): pid=5696 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 14 01:35:52.794802 kernel: audit: type=1300 audit(1768354552.730:951): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd725295f0 a2=3 a3=0 items=0 ppid=1 pid=5696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:35:52.795063 kernel: audit: type=1327 audit(1768354552.730:951): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:52.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:35:52.810000 audit[5696]: USER_START pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.825997 kernel: audit: type=1105 audit(1768354552.810:952): pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.813000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.840223 kernel: audit: type=1103 audit(1768354552.813:953): pid=5700 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.923291 sshd[5700]: Connection closed by 10.0.0.1 port 53606 Jan 14 01:35:52.923549 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Jan 14 01:35:52.926000 audit[5696]: USER_END pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.932163 systemd[1]: sshd@33-10.0.0.15:22-10.0.0.1:53606.service: Deactivated successfully. Jan 14 01:35:52.935573 systemd[1]: session-35.scope: Deactivated successfully. Jan 14 01:35:52.937261 systemd-logind[1583]: Session 35 logged out. Waiting for processes to exit. Jan 14 01:35:52.926000 audit[5696]: CRED_DISP pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.939886 systemd-logind[1583]: Removed session 35. 
Jan 14 01:35:52.947838 kernel: audit: type=1106 audit(1768354552.926:954): pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.948101 kernel: audit: type=1104 audit(1768354552.926:955): pid=5696 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:35:52.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.15:22-10.0.0.1:53606 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'