Jan 14 01:06:49.364211 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 01:06:49.364287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:06:49.364300 kernel: BIOS-provided physical RAM map:
Jan 14 01:06:49.364313 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 01:06:49.364323 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 01:06:49.364333 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 01:06:49.364345 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 01:06:49.364354 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 01:06:49.364363 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 01:06:49.364373 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 01:06:49.364383 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 14 01:06:49.364397 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 01:06:49.364407 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 01:06:49.364417 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 01:06:49.364429 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 01:06:49.364440 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 01:06:49.364454 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 01:06:49.364464 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 01:06:49.364475 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 01:06:49.364485 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 01:06:49.364496 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 01:06:49.364506 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 01:06:49.364590 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 01:06:49.364602 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:06:49.364612 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 01:06:49.364623 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:06:49.364637 kernel: NX (Execute Disable) protection: active
Jan 14 01:06:49.364648 kernel: APIC: Static calls initialized
Jan 14 01:06:49.364659 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 14 01:06:49.364669 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 14 01:06:49.364679 kernel: extended physical RAM map:
Jan 14 01:06:49.364690 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 01:06:49.364700 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 14 01:06:49.364711 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 14 01:06:49.364721 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 14 01:06:49.364731 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 14 01:06:49.364742 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 14 01:06:49.364755 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 14 01:06:49.364766 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 14 01:06:49.364776 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 14 01:06:49.364792 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 14 01:06:49.364805 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 14 01:06:49.364816 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 14 01:06:49.364828 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 14 01:06:49.364839 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 14 01:06:49.364850 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 14 01:06:49.364861 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 14 01:06:49.364872 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 14 01:06:49.364882 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 14 01:06:49.364893 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 14 01:06:49.364907 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 14 01:06:49.364918 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 14 01:06:49.364929 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 14 01:06:49.364940 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 14 01:06:49.364951 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 14 01:06:49.364964 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:06:49.364976 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 14 01:06:49.364986 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 14 01:06:49.364996 kernel: efi: EFI v2.7 by EDK II
Jan 14 01:06:49.365005 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 14 01:06:49.365015 kernel: random: crng init done
Jan 14 01:06:49.365028 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 14 01:06:49.365038 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 14 01:06:49.365050 kernel: secureboot: Secure boot disabled
Jan 14 01:06:49.365063 kernel: SMBIOS 2.8 present.
Jan 14 01:06:49.365073 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 14 01:06:49.365082 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:06:49.365091 kernel: Hypervisor detected: KVM
Jan 14 01:06:49.365101 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 01:06:49.365110 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:06:49.365120 kernel: kvm-clock: using sched offset of 14933318794 cycles
Jan 14 01:06:49.365131 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:06:49.365149 kernel: tsc: Detected 2445.426 MHz processor
Jan 14 01:06:49.365161 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:06:49.365173 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:06:49.365185 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 14 01:06:49.365198 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 01:06:49.365212 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:06:49.365274 kernel: Using GB pages for direct mapping
Jan 14 01:06:49.365292 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:06:49.365305 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 14 01:06:49.365316 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 14 01:06:49.365325 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365336 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365346 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 14 01:06:49.365356 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365585 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365602 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365613 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:06:49.365623 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 14 01:06:49.365633 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 14 01:06:49.365643 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 14 01:06:49.365653 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 14 01:06:49.365671 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 14 01:06:49.365684 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 14 01:06:49.365694 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 14 01:06:49.365704 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 14 01:06:49.365714 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 14 01:06:49.365724 kernel: No NUMA configuration found
Jan 14 01:06:49.365734 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 14 01:06:49.365748 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 14 01:06:49.365761 kernel: Zone ranges:
Jan 14 01:06:49.365775 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:06:49.365785 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 14 01:06:49.365795 kernel: Normal empty
Jan 14 01:06:49.365805 kernel: Device empty
Jan 14 01:06:49.365815 kernel: Movable zone start for each node
Jan 14 01:06:49.365825 kernel: Early memory node ranges
Jan 14 01:06:49.365840 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 01:06:49.365853 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 14 01:06:49.365863 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 14 01:06:49.365873 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 14 01:06:49.365883 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 14 01:06:49.365892 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 14 01:06:49.365903 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 14 01:06:49.365918 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 14 01:06:49.365932 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 14 01:06:49.365942 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:06:49.365961 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 01:06:49.365975 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 14 01:06:49.365985 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:06:49.365996 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 14 01:06:49.366010 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 14 01:06:49.366022 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 14 01:06:49.366033 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 14 01:06:49.366051 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 14 01:06:49.366064 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:06:49.366076 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:06:49.366088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:06:49.366105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:06:49.366116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:06:49.366129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:06:49.366141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:06:49.366153 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:06:49.366165 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:06:49.366177 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 14 01:06:49.366192 kernel: TSC deadline timer available
Jan 14 01:06:49.366204 kernel: CPU topo: Max. logical packages: 1
Jan 14 01:06:49.366216 kernel: CPU topo: Max. logical dies: 1
Jan 14 01:06:49.366275 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:06:49.366287 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:06:49.366299 kernel: CPU topo: Num. cores per package: 4
Jan 14 01:06:49.366311 kernel: CPU topo: Num. threads per package: 4
Jan 14 01:06:49.366326 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 14 01:06:49.366338 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:06:49.366350 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 14 01:06:49.366362 kernel: kvm-guest: setup PV sched yield
Jan 14 01:06:49.366374 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 14 01:06:49.366386 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:06:49.366398 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:06:49.366411 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 14 01:06:49.366425 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 14 01:06:49.366437 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 14 01:06:49.366449 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 14 01:06:49.366461 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:06:49.366473 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:06:49.366486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:06:49.366501 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 01:06:49.366586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:06:49.366599 kernel: Fallback order for Node 0: 0
Jan 14 01:06:49.366612 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 14 01:06:49.366623 kernel: Policy zone: DMA32
Jan 14 01:06:49.366636 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:06:49.366648 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 01:06:49.366664 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 01:06:49.366676 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:06:49.366687 kernel: Dynamic Preempt: voluntary
Jan 14 01:06:49.366699 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:06:49.366712 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:06:49.366724 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 01:06:49.366736 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:06:49.366750 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:06:49.366767 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:06:49.366777 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:06:49.366788 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 01:06:49.366799 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:06:49.366809 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:06:49.366820 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 01:06:49.366831 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 14 01:06:49.366848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:06:49.366861 kernel: Console: colour dummy device 80x25
Jan 14 01:06:49.366875 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:06:49.366885 kernel: ACPI: Core revision 20240827
Jan 14 01:06:49.366896 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 14 01:06:49.366907 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:06:49.366917 kernel: x2apic enabled
Jan 14 01:06:49.366928 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:06:49.366944 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 14 01:06:49.366958 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 14 01:06:49.366968 kernel: kvm-guest: setup PV IPIs
Jan 14 01:06:49.366979 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 14 01:06:49.366990 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:06:49.367001 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 14 01:06:49.367015 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:06:49.367028 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 14 01:06:49.367041 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 14 01:06:49.367053 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:06:49.367064 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:06:49.367076 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:06:49.367088 kernel: Speculative Store Bypass: Vulnerable
Jan 14 01:06:49.367100 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 14 01:06:49.367118 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 14 01:06:49.367130 kernel: active return thunk: srso_alias_return_thunk
Jan 14 01:06:49.367142 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 14 01:06:49.367154 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 14 01:06:49.367167 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 01:06:49.367179 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:06:49.367194 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:06:49.367207 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:06:49.367264 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:06:49.367277 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 14 01:06:49.367287 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:06:49.367298 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:06:49.367309 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:06:49.367327 kernel: landlock: Up and running.
Jan 14 01:06:49.367339 kernel: SELinux: Initializing.
Jan 14 01:06:49.367349 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:06:49.367360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 14 01:06:49.367371 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 14 01:06:49.367382 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 14 01:06:49.367392 kernel: signal: max sigframe size: 1776
Jan 14 01:06:49.367409 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:06:49.367422 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:06:49.367433 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 01:06:49.367444 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 01:06:49.367454 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:06:49.367465 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:06:49.367476 kernel: .... node #0, CPUs: #1 #2 #3
Jan 14 01:06:49.367491 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 01:06:49.367504 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 14 01:06:49.367601 kernel: Memory: 2439052K/2565800K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 120812K reserved, 0K cma-reserved)
Jan 14 01:06:49.367613 kernel: devtmpfs: initialized
Jan 14 01:06:49.367624 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:06:49.367634 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 14 01:06:49.367645 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 14 01:06:49.367660 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 14 01:06:49.367674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 14 01:06:49.367686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 14 01:06:49.367697 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 14 01:06:49.367707 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:06:49.367718 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 01:06:49.367728 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:06:49.367743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:06:49.367758 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:06:49.367769 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:06:49.367780 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:06:49.367791 kernel: audit: type=2000 audit(1768352796.471:1): state=initialized audit_enabled=0 res=1
Jan 14 01:06:49.367801 kernel: cpuidle: using governor menu
Jan 14 01:06:49.367812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:06:49.367828 kernel: dca service started, version 1.12.1
Jan 14 01:06:49.367841 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 14 01:06:49.367855 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:06:49.367866 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:06:49.367876 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:06:49.367887 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:06:49.367898 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:06:49.367912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:06:49.367925 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:06:49.367938 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:06:49.367950 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:06:49.367961 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:06:49.367974 kernel: ACPI: Interpreter enabled
Jan 14 01:06:49.367986 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 14 01:06:49.367998 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:06:49.368014 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:06:49.368026 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:06:49.368038 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:06:49.368050 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:06:49.368455 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:06:49.368785 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 14 01:06:49.369026 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 14 01:06:49.369043 kernel: PCI host bridge to bus 0000:00
Jan 14 01:06:49.369356 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:06:49.369657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:06:49.369876 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:06:49.370095 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 14 01:06:49.370367 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 14 01:06:49.370659 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 14 01:06:49.370874 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:06:49.371201 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:06:49.371496 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:06:49.372391 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 14 01:06:49.372722 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 14 01:06:49.372975 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 14 01:06:49.373268 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:06:49.373606 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 14 01:06:49.373870 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 14 01:06:49.374119 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 14 01:06:49.374415 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 14 01:06:49.374772 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:06:49.374999 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 14 01:06:49.375276 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 14 01:06:49.375887 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 14 01:06:49.376210 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:06:49.376601 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 14 01:06:49.376872 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 14 01:06:49.377128 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 14 01:06:49.377420 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 14 01:06:49.377746 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:06:49.377982 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:06:49.378408 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:06:49.378726 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 14 01:06:49.378978 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 14 01:06:49.379274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:06:49.379495 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 14 01:06:49.379595 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:06:49.379610 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:06:49.379622 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:06:49.379633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:06:49.379644 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:06:49.379660 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:06:49.379671 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:06:49.379683 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:06:49.379694 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:06:49.379706 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:06:49.379717 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:06:49.379728 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:06:49.379744 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:06:49.379755 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:06:49.379767 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:06:49.379778 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:06:49.379789 kernel: iommu: Default domain type: Translated
Jan 14 01:06:49.379800 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:06:49.379811 kernel: efivars: Registered efivars operations
Jan 14 01:06:49.379825 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:06:49.379836 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:06:49.379848 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 14 01:06:49.379859 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 14 01:06:49.379869 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 14 01:06:49.379881 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 14 01:06:49.379892 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 14 01:06:49.379905 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 14 01:06:49.379917 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 14 01:06:49.379929 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 14 01:06:49.380153 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:06:49.380440 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:06:49.380806 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:06:49.380826 kernel: vgaarb: loaded
Jan 14 01:06:49.380846 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 14 01:06:49.380859 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 14 01:06:49.380871 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:06:49.380885 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:06:49.380897 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:06:49.380910 kernel: pnp: PnP ACPI init
Jan 14 01:06:49.381408 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 14 01:06:49.381434 kernel: pnp: PnP ACPI: found 6 devices
Jan 14 01:06:49.381448 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:06:49.381461 kernel: NET: Registered PF_INET protocol family
Jan 14 01:06:49.381475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 01:06:49.381487 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 14 01:06:49.381499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:06:49.381581 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:06:49.381617 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 14 01:06:49.381634 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 14 01:06:49.381647 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:06:49.381660 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 14 01:06:49.381673 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:06:49.381687 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:06:49.381946 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 14 01:06:49.382198 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 14 01:06:49.382464 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:06:49.382759 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:06:49.382969 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:06:49.383192 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 14 01:06:49.383466 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 14 01:06:49.383778 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 14 01:06:49.383798 kernel: PCI: CLS 0 bytes, default 64
Jan 14 01:06:49.383812 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 14 01:06:49.383827 kernel: Initialise system trusted keyrings
Jan 14 01:06:49.383840 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 14 01:06:49.383853 kernel: Key type asymmetric registered
Jan 14 01:06:49.383870 kernel: Asymmetric key parser 'x509' registered
Jan 14 01:06:49.383883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 14 01:06:49.383896 kernel: io scheduler mq-deadline registered
Jan 14 01:06:49.383912 kernel: io scheduler kyber registered
Jan 14 01:06:49.383924 kernel: io scheduler bfq registered
Jan 14 01:06:49.383937 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 01:06:49.383950 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 14 01:06:49.383966 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 14 01:06:49.383979 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 14 01:06:49.384040 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 01:06:49.384051 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 01:06:49.384065 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 14 01:06:49.384081 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 14 01:06:49.384096 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 14 01:06:49.384404 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 14 01:06:49.384426 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 14 01:06:49.384729 kernel: rtc_cmos 00:04: registered as rtc0
Jan 14 01:06:49.384952 kernel: rtc_cmos 00:04: setting system clock to 2026-01-14T01:06:44 UTC (1768352804)
Jan 14 01:06:49.385173 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 14 01:06:49.385190 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 14 01:06:49.385203 kernel: efifb: probing for efifb
Jan 14 01:06:49.385216 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 14 01:06:49.385283 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 14 01:06:49.385294 kernel: efifb: scrolling: redraw
Jan 14 01:06:49.385305 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 01:06:49.385321 kernel: Console: switching to colour frame buffer device 160x50
Jan 14 01:06:49.385334 kernel: fb0: EFI VGA frame buffer device
Jan 14 01:06:49.385347 kernel: pstore: Using crash dump compression: deflate
Jan 14 01:06:49.385359 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 01:06:49.385370 kernel: NET: Registered PF_INET6 protocol family
Jan 14 01:06:49.385384 kernel: Segment Routing with IPv6
Jan 14 01:06:49.385398 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 01:06:49.385416 kernel: NET: Registered PF_PACKET protocol family
Jan 14 01:06:49.385427 kernel: Key type dns_resolver registered
Jan 14 01:06:49.385438 kernel: IPI shorthand broadcast: enabled
Jan 14 01:06:49.385449 kernel: sched_clock: Marking stable (4801023438,
4000535320)->(10358068705, -1556509947) Jan 14 01:06:49.385460 kernel: registered taskstats version 1 Jan 14 01:06:49.385471 kernel: Loading compiled-in X.509 certificates Jan 14 01:06:49.385484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb' Jan 14 01:06:49.385498 kernel: Demotion targets for Node 0: null Jan 14 01:06:49.385595 kernel: Key type .fscrypt registered Jan 14 01:06:49.385608 kernel: Key type fscrypt-provisioning registered Jan 14 01:06:49.385620 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:06:49.385630 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:06:49.385641 kernel: ima: No architecture policies found Jan 14 01:06:49.385653 kernel: clk: Disabling unused clocks Jan 14 01:06:49.385670 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:06:49.385684 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:06:49.385695 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 14 01:06:49.385706 kernel: Run /init as init process Jan 14 01:06:49.385717 kernel: with arguments: Jan 14 01:06:49.385728 kernel: /init Jan 14 01:06:49.385739 kernel: with environment: Jan 14 01:06:49.385752 kernel: HOME=/ Jan 14 01:06:49.385770 kernel: TERM=linux Jan 14 01:06:49.385782 kernel: SCSI subsystem initialized Jan 14 01:06:49.385793 kernel: libata version 3.00 loaded. 
Jan 14 01:06:49.386035 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 01:06:49.386055 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 01:06:49.386361 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 01:06:49.386796 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 01:06:49.387345 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 01:06:49.387831 kernel: scsi host0: ahci Jan 14 01:06:49.388094 kernel: scsi host1: ahci Jan 14 01:06:49.388439 kernel: scsi host2: ahci Jan 14 01:06:49.388766 kernel: scsi host3: ahci Jan 14 01:06:49.389084 kernel: scsi host4: ahci Jan 14 01:06:49.389407 kernel: scsi host5: ahci Jan 14 01:06:49.389428 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 14 01:06:49.389440 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 14 01:06:49.389451 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 14 01:06:49.389463 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 14 01:06:49.389481 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 14 01:06:49.389496 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 14 01:06:49.389508 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 01:06:49.389605 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 14 01:06:49.389616 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 01:06:49.389628 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 14 01:06:49.389639 kernel: ata3.00: applying bridge limits Jan 14 01:06:49.389654 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 01:06:49.389666 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 01:06:49.389681 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 01:06:49.389692 kernel: ata5: 
SATA link down (SStatus 0 SControl 300) Jan 14 01:06:49.389703 kernel: ata3.00: LPM support broken, forcing max_power Jan 14 01:06:49.389713 kernel: ata3.00: configured for UDMA/100 Jan 14 01:06:49.390364 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 01:06:49.390751 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 14 01:06:49.390998 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 14 01:06:49.391309 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 14 01:06:49.391327 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:06:49.391340 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 01:06:49.391351 kernel: GPT:16515071 != 27000831 Jan 14 01:06:49.391369 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 01:06:49.391381 kernel: GPT:16515071 != 27000831 Jan 14 01:06:49.391392 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:06:49.391402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 01:06:49.391747 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 14 01:06:49.391770 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:06:49.391782 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:06:49.391799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:06:49.391810 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:06:49.391821 kernel: raid6: avx2x4 gen() 27818 MB/s Jan 14 01:06:49.391833 kernel: raid6: avx2x2 gen() 26184 MB/s Jan 14 01:06:49.391848 kernel: raid6: avx2x1 gen() 16623 MB/s Jan 14 01:06:49.391860 kernel: raid6: using algorithm avx2x4 gen() 27818 MB/s Jan 14 01:06:49.391872 kernel: raid6: .... 
xor() 5321 MB/s, rmw enabled Jan 14 01:06:49.391887 kernel: raid6: using avx2x2 recovery algorithm Jan 14 01:06:49.391898 kernel: xor: automatically using best checksumming function avx Jan 14 01:06:49.391909 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:06:49.391921 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (181) Jan 14 01:06:49.391934 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac Jan 14 01:06:49.391946 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:06:49.391957 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:06:49.391971 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:06:49.391982 kernel: loop: module loaded Jan 14 01:06:49.391994 kernel: loop0: detected capacity change from 0 to 100552 Jan 14 01:06:49.392009 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:06:49.392021 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:06:49.392036 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:06:49.392051 systemd[1]: Detected virtualization kvm. Jan 14 01:06:49.392063 systemd[1]: Detected architecture x86-64. Jan 14 01:06:49.392077 systemd[1]: Running in initrd. Jan 14 01:06:49.392091 systemd[1]: No hostname configured, using default hostname. Jan 14 01:06:49.392104 systemd[1]: Hostname set to . Jan 14 01:06:49.392115 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:06:49.392127 systemd[1]: Queued start job for default target initrd.target. 
Jan 14 01:06:49.392143 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:06:49.392155 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:06:49.392171 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:06:49.392187 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:06:49.392200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:06:49.392215 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:06:49.392281 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:06:49.392294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:06:49.392306 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:06:49.392317 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:06:49.392332 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:06:49.392344 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:06:49.392360 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:06:49.392372 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:06:49.392384 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:06:49.392397 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:06:49.392412 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:06:49.392424 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:06:49.392436 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jan 14 01:06:49.392451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:06:49.392463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:06:49.392477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:06:49.392492 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:06:49.392504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:06:49.392596 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:06:49.392609 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:06:49.392625 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:06:49.392638 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:06:49.392653 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:06:49.392665 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:06:49.392677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:06:49.392693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:06:49.392706 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:06:49.392718 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:06:49.392732 kernel: hrtimer: interrupt took 3878571 ns Jan 14 01:06:49.392746 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:06:49.392944 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:06:49.393181 systemd-journald[317]: Collecting audit messages is enabled. 
Jan 14 01:06:49.393216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:06:49.393373 kernel: audit: type=1130 audit(1768352809.387:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.393389 systemd-journald[317]: Journal started Jan 14 01:06:49.393500 systemd-journald[317]: Runtime Journal (/run/log/journal/d8fa97e79a7a4094b2c4d5b479025f24) is 6M, max 48M, 42M free. Jan 14 01:06:49.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.421651 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:06:49.458631 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:06:49.458691 kernel: audit: type=1130 audit(1768352809.450:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.484075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:06:49.514850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:06:49.565370 kernel: audit: type=1130 audit(1768352809.521:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:49.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.535936 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:06:49.607859 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:06:49.609705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:06:49.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.654681 kernel: audit: type=1130 audit(1768352809.627:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.654767 kernel: Bridge firewalling registered Jan 14 01:06:49.657370 systemd-modules-load[320]: Inserted module 'br_netfilter' Jan 14 01:06:49.663368 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:06:49.681131 kernel: audit: type=1130 audit(1768352809.663:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.663392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 14 01:06:49.738041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:06:49.754695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:06:49.768130 kernel: audit: type=1130 audit(1768352809.754:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.831883 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:06:49.868142 kernel: audit: type=1130 audit(1768352809.846:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:49.859997 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:06:50.054072 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:06:50.088688 kernel: audit: type=1130 audit(1768352810.058:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:50.088975 kernel: audit: type=1334 audit(1768352810.061:10): prog-id=6 op=LOAD Jan 14 01:06:50.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:50.061000 audit: BPF prog-id=6 op=LOAD Jan 14 01:06:50.089194 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf Jan 14 01:06:50.063670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:06:50.255447 systemd-resolved[365]: Positive Trust Anchors: Jan 14 01:06:50.255509 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:06:50.255608 systemd-resolved[365]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:06:50.255655 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:06:50.371585 kernel: audit: type=1130 audit(1768352810.341:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:50.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:50.328783 systemd-resolved[365]: Defaulting to hostname 'linux'. Jan 14 01:06:50.331076 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:06:50.341859 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:06:50.510701 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:06:50.597202 kernel: iscsi: registered transport (tcp) Jan 14 01:06:50.715503 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:06:50.716655 kernel: QLogic iSCSI HBA Driver Jan 14 01:06:50.851083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:06:50.921491 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:06:50.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:50.957219 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:06:51.278673 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:06:51.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:51.289631 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:06:51.312986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:06:51.433801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:06:51.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:51.451000 audit: BPF prog-id=7 op=LOAD Jan 14 01:06:51.451000 audit: BPF prog-id=8 op=LOAD Jan 14 01:06:51.455815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:06:51.537111 systemd-udevd[597]: Using default interface naming scheme 'v257'. Jan 14 01:06:51.558459 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:06:51.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:51.574626 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:06:51.751787 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation Jan 14 01:06:51.956922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:06:51.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:51.968000 audit: BPF prog-id=9 op=LOAD Jan 14 01:06:51.977705 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:06:52.232706 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:06:52.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:52.244718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 14 01:06:52.293959 systemd-networkd[707]: lo: Link UP Jan 14 01:06:52.294026 systemd-networkd[707]: lo: Gained carrier Jan 14 01:06:52.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:52.295773 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:06:52.319863 systemd[1]: Reached target network.target - Network. Jan 14 01:06:52.461731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:06:52.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:52.470204 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:06:52.858691 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 01:06:52.896187 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 01:06:52.924468 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 01:06:52.951301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:06:52.970697 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:06:52.963024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:06:53.024333 disk-uuid[771]: Primary Header is updated. Jan 14 01:06:53.024333 disk-uuid[771]: Secondary Entries is updated. Jan 14 01:06:53.024333 disk-uuid[771]: Secondary Header is updated. 
Jan 14 01:06:53.041644 kernel: AES CTR mode by8 optimization enabled Jan 14 01:06:53.066356 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:06:53.066586 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:06:53.071482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:06:53.071695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:06:53.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:53.110614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:06:53.124180 systemd-networkd[707]: eth0: Link UP Jan 14 01:06:53.161869 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 14 01:06:53.124674 systemd-networkd[707]: eth0: Gained carrier Jan 14 01:06:53.124695 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:06:53.135350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:06:53.317801 systemd-networkd[707]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:06:53.344108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:06:53.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:06:53.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:53.345621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:06:53.358035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:06:53.467681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:06:53.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:53.571497 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:06:53.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:53.586643 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:06:53.603775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:06:53.618968 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:06:53.648288 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:06:53.706355 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:06:53.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:06:54.427883 disk-uuid[773]: Warning: The kernel is still using the old partition table. 
Jan 14 01:06:54.427883 disk-uuid[773]: The new table will be used at the next reboot or after you
Jan 14 01:06:54.427883 disk-uuid[773]: run partprobe(8) or kpartx(8)
Jan 14 01:06:54.427883 disk-uuid[773]: The operation has completed successfully.
Jan 14 01:06:54.476174 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:06:54.541505 kernel: kauditd_printk_skb: 17 callbacks suppressed
Jan 14 01:06:54.541633 kernel: audit: type=1130 audit(1768352814.489:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.541654 kernel: audit: type=1131 audit(1768352814.489:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.476613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:06:54.503663 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:06:54.846923 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866)
Jan 14 01:06:54.860025 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:06:54.860156 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:06:54.888937 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:06:54.889106 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:06:54.925658 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:06:54.930653 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:06:54.954177 kernel: audit: type=1130 audit(1768352814.929:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:54.934905 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:06:55.173929 systemd-networkd[707]: eth0: Gained IPv6LL
Jan 14 01:06:56.046445 ignition[885]: Ignition 2.24.0
Jan 14 01:06:56.046660 ignition[885]: Stage: fetch-offline
Jan 14 01:06:56.046831 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:06:56.046887 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:06:56.047110 ignition[885]: parsed url from cmdline: ""
Jan 14 01:06:56.047116 ignition[885]: no config URL provided
Jan 14 01:06:56.047327 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:06:56.047346 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:06:56.047442 ignition[885]: op(1): [started] loading QEMU firmware config module
Jan 14 01:06:56.047450 ignition[885]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 14 01:06:56.075741 ignition[885]: op(1): [finished] loading QEMU firmware config module
Jan 14 01:06:56.641777 ignition[885]: parsing config with SHA512: d713afac8527c6fda78b598a18e899aaba8c476d438cdd7a60f5e9ea7f94dfcabbffa6cf35c086b93dbaa010a4166c5760d066cdc1d93a342f4e1bd3be5326b4
Jan 14 01:06:56.725446 unknown[885]: fetched base config from "system"
Jan 14 01:06:56.725473 unknown[885]: fetched user config from "qemu"
Jan 14 01:06:56.740230 ignition[885]: fetch-offline: fetch-offline passed
Jan 14 01:06:56.740706 ignition[885]: Ignition finished successfully
Jan 14 01:06:56.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:56.746166 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:06:56.767671 kernel: audit: type=1130 audit(1768352816.752:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:56.754373 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 14 01:06:56.756323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:06:56.887459 ignition[896]: Ignition 2.24.0
Jan 14 01:06:56.887507 ignition[896]: Stage: kargs
Jan 14 01:06:56.887819 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:06:56.887833 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:06:56.888818 ignition[896]: kargs: kargs passed
Jan 14 01:06:56.888868 ignition[896]: Ignition finished successfully
Jan 14 01:06:56.942729 kernel: audit: type=1130 audit(1768352816.912:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:56.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:56.910774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:06:56.919847 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:06:57.048054 ignition[904]: Ignition 2.24.0
Jan 14 01:06:57.048101 ignition[904]: Stage: disks
Jan 14 01:06:57.048377 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:06:57.048396 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:06:58.423582 ignition[904]: disks: disks passed
Jan 14 01:06:58.425010 ignition[904]: Ignition finished successfully
Jan 14 01:06:58.435619 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:06:58.459909 kernel: audit: type=1130 audit(1768352818.442:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:58.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:58.444128 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:06:58.466930 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:06:58.474739 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:06:58.506190 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:06:58.514648 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:06:58.532255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:06:58.625977 systemd-fsck[914]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Jan 14 01:06:58.633027 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:06:58.657647 kernel: audit: type=1130 audit(1768352818.639:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:58.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:58.665662 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:06:58.953898 kernel: EXT4-fs (vda9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none.
Jan 14 01:06:58.955072 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:06:58.956608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:06:58.973124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:06:58.989641 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:06:58.998174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 01:06:58.998308 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:06:58.998338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:06:59.036794 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:06:59.066054 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923)
Jan 14 01:06:59.066085 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:06:59.066101 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:06:59.070714 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:06:59.094462 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:06:59.094501 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:06:59.098701 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:06:59.802193 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:06:59.832506 kernel: audit: type=1130 audit(1768352819.811:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:59.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:59.827115 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:06:59.869649 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:06:59.908915 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:06:59.920694 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:06:59.962742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:06:59.981363 kernel: audit: type=1130 audit(1768352819.967:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:06:59.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:00.047622 ignition[1020]: INFO : Ignition 2.24.0
Jan 14 01:07:00.047622 ignition[1020]: INFO : Stage: mount
Jan 14 01:07:00.058657 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:07:00.058657 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:07:00.058657 ignition[1020]: INFO : mount: mount passed
Jan 14 01:07:00.058657 ignition[1020]: INFO : Ignition finished successfully
Jan 14 01:07:00.085314 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:07:00.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:00.110914 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:07:00.136508 kernel: audit: type=1130 audit(1768352820.105:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:00.175792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:07:00.239656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1032)
Jan 14 01:07:00.255344 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:07:00.255398 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:07:00.295895 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:07:00.295975 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:07:00.303437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:07:00.381440 ignition[1049]: INFO : Ignition 2.24.0
Jan 14 01:07:00.381440 ignition[1049]: INFO : Stage: files
Jan 14 01:07:00.390150 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:07:00.390150 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:07:00.411190 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 01:07:00.423561 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 01:07:00.423561 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 01:07:00.444313 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 01:07:00.452752 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 01:07:00.452752 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 01:07:00.446437 unknown[1049]: wrote ssh authorized keys file for user: core
Jan 14 01:07:00.470362 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 01:07:00.470362 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 14 01:07:00.566995 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 01:07:00.685489 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 01:07:00.685489 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 01:07:00.685489 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 01:07:00.685489 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:07:00.685489 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:07:00.730264 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 14 01:07:01.031238 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 01:07:01.559496 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:07:01.559496 ignition[1049]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 01:07:01.588811 ignition[1049]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 14 01:07:01.610134 ignition[1049]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:07:01.710004 ignition[1049]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:07:01.726824 ignition[1049]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:07:01.734686 ignition[1049]: INFO : files: files passed
Jan 14 01:07:01.734686 ignition[1049]: INFO : Ignition finished successfully
Jan 14 01:07:01.818647 kernel: audit: type=1130 audit(1768352821.758:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.746053 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 01:07:01.760844 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 01:07:01.781907 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 01:07:01.866010 kernel: audit: type=1130 audit(1768352821.836:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.866054 kernel: audit: type=1131 audit(1768352821.836:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.824747 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 01:07:01.870781 initrd-setup-root-after-ignition[1081]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 14 01:07:01.824905 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 01:07:01.881098 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:07:01.881098 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:07:01.897214 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:07:01.909840 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:07:01.933911 kernel: audit: type=1130 audit(1768352821.909:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:01.910704 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 01:07:01.943363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 01:07:02.047409 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 01:07:02.048205 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 01:07:02.094030 kernel: audit: type=1130 audit(1768352822.054:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.094072 kernel: audit: type=1131 audit(1768352822.054:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.055088 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 01:07:02.104688 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 01:07:02.115914 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 01:07:02.118646 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 01:07:02.173730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:07:02.197843 kernel: audit: type=1130 audit(1768352822.173:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.199196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 01:07:02.259884 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:07:02.260454 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:07:02.266967 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:07:02.278356 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 01:07:02.289794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 01:07:02.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.290010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:07:02.314823 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 01:07:02.321006 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 01:07:02.332602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 01:07:02.342124 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:07:02.350657 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 01:07:02.357504 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:07:02.368616 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 01:07:02.368807 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:07:02.393647 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 01:07:02.411175 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 01:07:02.426925 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 01:07:02.431669 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 01:07:02.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.431871 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:07:02.445703 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:07:02.458121 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:07:02.475743 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 01:07:02.488064 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:07:02.496906 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 01:07:02.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.497153 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:07:02.512639 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 01:07:02.512823 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:07:02.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.528962 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 01:07:02.538678 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 01:07:02.544498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:07:02.552409 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 01:07:02.556491 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 01:07:02.564149 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 01:07:02.564341 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:07:02.580386 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 01:07:02.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.580507 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:07:02.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.584270 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Jan 14 01:07:02.584476 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:07:02.603715 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 01:07:02.603922 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:07:02.610796 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 01:07:02.611001 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 01:07:02.616958 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 01:07:02.660762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 01:07:02.669100 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 01:07:02.669417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:07:02.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.673980 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 01:07:02.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.706772 ignition[1107]: INFO : Ignition 2.24.0
Jan 14 01:07:02.706772 ignition[1107]: INFO : Stage: umount
Jan 14 01:07:02.706772 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:07:02.706772 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 14 01:07:02.706772 ignition[1107]: INFO : umount: umount passed
Jan 14 01:07:02.706772 ignition[1107]: INFO : Ignition finished successfully
Jan 14 01:07:02.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.674184 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:07:02.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.700617 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 01:07:02.700775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:07:02.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.718400 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 01:07:02.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.718658 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 01:07:02.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.722717 systemd[1]: Stopped target network.target - Network.
Jan 14 01:07:02.746973 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 01:07:02.747082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 01:07:02.759612 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 01:07:02.759701 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 01:07:02.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.773819 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 01:07:02.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.773950 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 01:07:02.775588 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 01:07:02.880000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 01:07:02.775672 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 01:07:02.799120 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 01:07:02.808990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 01:07:02.838337 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 01:07:02.840339 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 01:07:02.840479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 01:07:02.855216 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 01:07:02.855433 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 01:07:02.872383 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 01:07:02.925631 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 01:07:02.925767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:07:02.935135 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 01:07:02.941454 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 01:07:02.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:02.941617 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:07:02.953900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:07:02.963863 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 01:07:02.964076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 01:07:02.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.001717 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 01:07:03.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.008000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 01:07:03.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.001981 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 01:07:03.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.009868 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 01:07:03.010012 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 01:07:03.014811 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 01:07:03.014893 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:07:03.014984 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 01:07:03.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.015032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:07:03.015667 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 01:07:03.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.015939 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:07:03.043854 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 01:07:03.043966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:07:03.051414 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 01:07:03.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.051471 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:07:03.055975 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 01:07:03.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.056061 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:07:03.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.077496 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 01:07:03.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.077686 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:07:03.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.089238 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 01:07:03.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.089380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:07:03.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:03.109483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 01:07:03.116168 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 01:07:03.116250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:07:03.129968 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 01:07:03.130056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:07:03.139014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 01:07:03.139090 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 01:07:03.147702 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 01:07:03.147781 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:07:03.155989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:07:03.156143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:07:03.161652 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 01:07:03.161830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 01:07:03.174921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 01:07:03.175095 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 01:07:03.184469 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 01:07:03.194142 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 01:07:03.257276 systemd[1]: Switching root.
Jan 14 01:07:03.300749 systemd-journald[317]: Journal stopped
Jan 14 01:07:06.632499 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Jan 14 01:07:06.632641 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 01:07:06.632657 kernel: SELinux: policy capability open_perms=1
Jan 14 01:07:06.632669 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 01:07:06.632681 kernel: SELinux: policy capability always_check_network=0
Jan 14 01:07:06.632692 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 01:07:06.632747 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 01:07:06.632759 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 01:07:06.632770 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 01:07:06.632781 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 01:07:06.632798 systemd[1]: Successfully loaded SELinux policy in 155.772ms.
Jan 14 01:07:06.632821 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.092ms.
Jan 14 01:07:06.632834 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 01:07:06.632891 systemd[1]: Detected virtualization kvm.
Jan 14 01:07:06.632915 systemd[1]: Detected architecture x86-64.
Jan 14 01:07:06.632934 systemd[1]: Detected first boot.
Jan 14 01:07:06.632961 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 01:07:06.632979 zram_generator::config[1151]: No configuration found.
Jan 14 01:07:06.632993 kernel: Guest personality initialized and is inactive
Jan 14 01:07:06.633004 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 14 01:07:06.633071 kernel: Initialized host personality
Jan 14 01:07:06.633094 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 01:07:06.633109 systemd[1]: Populated /etc with preset unit settings.
Jan 14 01:07:06.633121 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jan 14 01:07:06.633133 kernel: audit: type=1334 audit(1768352825.336:89): prog-id=12 op=LOAD
Jan 14 01:07:06.633144 kernel: audit: type=1334 audit(1768352825.337:90): prog-id=3 op=UNLOAD
Jan 14 01:07:06.633155 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 01:07:06.633204 kernel: audit: type=1334 audit(1768352825.337:91): prog-id=13 op=LOAD
Jan 14 01:07:06.633221 kernel: audit: type=1334 audit(1768352825.337:92): prog-id=14 op=LOAD
Jan 14 01:07:06.633232 kernel: audit: type=1334 audit(1768352825.337:93): prog-id=4 op=UNLOAD
Jan 14 01:07:06.633242 kernel: audit: type=1334 audit(1768352825.337:94): prog-id=5 op=UNLOAD
Jan 14 01:07:06.633254 kernel: audit: type=1131 audit(1768352825.340:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.633265 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 01:07:06.633276 kernel: audit: type=1334 audit(1768352825.408:96): prog-id=12 op=UNLOAD
Jan 14 01:07:06.633351 kernel: audit: type=1130 audit(1768352825.409:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.633364 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 01:07:06.633376 kernel: audit: type=1131 audit(1768352825.409:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.633394 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 01:07:06.633443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 01:07:06.633501 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 01:07:06.633579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 01:07:06.633593 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 01:07:06.633653 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 01:07:06.633673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 01:07:06.633695 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 01:07:06.633716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:07:06.633736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:07:06.633757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 01:07:06.633779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 01:07:06.633866 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 01:07:06.633896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:07:06.633915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 01:07:06.633935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:07:06.633955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:07:06.633974 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 01:07:06.633992 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 01:07:06.634012 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:07:06.634084 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 01:07:06.634106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:07:06.634126 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:07:06.634144 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 01:07:06.634164 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:07:06.634185 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:07:06.634206 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 01:07:06.634280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 01:07:06.634361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 01:07:06.634379 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:07:06.634392 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 01:07:06.634404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:07:06.634416 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 01:07:06.634428 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 01:07:06.634486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:07:06.634499 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:07:06.634573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 01:07:06.634597 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 01:07:06.634622 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 01:07:06.634643 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 01:07:06.634663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:07:06.634742 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 01:07:06.634767 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 01:07:06.634781 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 01:07:06.634793 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 01:07:06.634810 systemd[1]: Reached target machines.target - Containers.
Jan 14 01:07:06.634822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 01:07:06.634835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:07:06.634917 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 01:07:06.634940 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 01:07:06.634959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 01:07:06.634979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 01:07:06.634998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 01:07:06.635018 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 01:07:06.635094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 01:07:06.635119 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 01:07:06.635139 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 01:07:06.635159 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 01:07:06.635179 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 01:07:06.635199 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 01:07:06.635278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:07:06.635336 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 01:07:06.635350 kernel: ACPI: bus type drm_connector registered
Jan 14 01:07:06.635361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 01:07:06.635373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 01:07:06.635421 kernel: fuse: init (API version 7.41)
Jan 14 01:07:06.635458 systemd-journald[1237]: Collecting audit messages is enabled.
Jan 14 01:07:06.635481 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 01:07:06.635632 systemd-journald[1237]: Journal started
Jan 14 01:07:06.635722 systemd-journald[1237]: Runtime Journal (/run/log/journal/d8fa97e79a7a4094b2c4d5b479025f24) is 6M, max 48M, 42M free.
Jan 14 01:07:05.886000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 01:07:06.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.334000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 01:07:06.334000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 01:07:06.338000 audit: BPF prog-id=15 op=LOAD
Jan 14 01:07:06.339000 audit: BPF prog-id=16 op=LOAD
Jan 14 01:07:06.339000 audit: BPF prog-id=17 op=LOAD
Jan 14 01:07:06.630000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 01:07:06.630000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcd22f9530 a2=4000 a3=0 items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:06.630000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 01:07:05.308917 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 01:07:05.338434 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 01:07:05.340339 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 01:07:05.340963 systemd[1]: systemd-journald.service: Consumed 1.735s CPU time.
Jan 14 01:07:06.660639 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 01:07:06.676619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:07:06.692214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:07:06.712672 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:07:06.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.724868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 01:07:06.730734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 01:07:06.736480 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 01:07:06.741263 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 01:07:06.746653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 01:07:06.752688 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 01:07:06.758192 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 01:07:06.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.765378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:07:06.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.772506 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 01:07:06.772950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 01:07:06.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.781798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 01:07:06.782168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 01:07:06.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.789204 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 01:07:06.790908 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 01:07:06.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.808462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 01:07:06.808937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 01:07:06.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.820150 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 01:07:06.820605 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 01:07:06.829087 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 01:07:06.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.829427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 01:07:06.838127 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:07:06.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.849099 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:07:06.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.860090 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 01:07:06.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.868632 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 01:07:06.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:06.906618 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 01:07:06.920081 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 01:07:06.936040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 01:07:06.949471 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 01:07:06.957711 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 01:07:06.957771 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:07:06.969775 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 01:07:06.978700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:07:06.978977 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:07:06.983479 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 01:07:07.007618 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 01:07:07.021471 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 01:07:07.039916 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 01:07:07.049857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 01:07:07.056797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:07:07.068770 systemd-journald[1237]: Time spent on flushing to /var/log/journal/d8fa97e79a7a4094b2c4d5b479025f24 is 30.221ms for 1200 entries.
Jan 14 01:07:07.068770 systemd-journald[1237]: System Journal (/var/log/journal/d8fa97e79a7a4094b2c4d5b479025f24) is 8M, max 163.5M, 155.5M free.
Jan 14 01:07:07.244027 systemd-journald[1237]: Received client request to flush runtime journal.
Jan 14 01:07:07.244112 kernel: loop1: detected capacity change from 0 to 224512 Jan 14 01:07:07.125161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:07:07.194893 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:07:07.208268 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:07:07.219392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:07:07.237764 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:07:07.245882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:07:07.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.254951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:07:07.271942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:07:07.295678 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jan 14 01:07:07.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.317018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:07:07.356222 kernel: loop2: detected capacity change from 0 to 111560 Jan 14 01:07:07.371702 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 14 01:07:07.371727 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 14 01:07:07.390194 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:07:07.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.411021 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:07:07.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.435943 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:07:07.470582 kernel: loop3: detected capacity change from 0 to 50784 Jan 14 01:07:07.619767 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:07:07.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:07.631000 audit: BPF prog-id=18 op=LOAD Jan 14 01:07:07.631000 audit: BPF prog-id=19 op=LOAD Jan 14 01:07:07.631000 audit: BPF prog-id=20 op=LOAD Jan 14 01:07:07.633990 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:07:07.638272 kernel: loop4: detected capacity change from 0 to 224512 Jan 14 01:07:07.687000 audit: BPF prog-id=21 op=LOAD Jan 14 01:07:07.708187 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:07:07.721787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:07:07.730219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:07:07.734000 audit: BPF prog-id=22 op=LOAD Jan 14 01:07:07.740000 audit: BPF prog-id=23 op=LOAD Jan 14 01:07:07.740000 audit: BPF prog-id=24 op=LOAD Jan 14 01:07:07.744877 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:07:07.756049 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 01:07:07.762000 audit: BPF prog-id=25 op=LOAD Jan 14 01:07:07.762000 audit: BPF prog-id=26 op=LOAD Jan 14 01:07:07.762000 audit: BPF prog-id=27 op=LOAD Jan 14 01:07:07.765208 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 01:07:07.823357 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 14 01:07:07.823417 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Jan 14 01:07:07.825941 kernel: loop6: detected capacity change from 0 to 50784 Jan 14 01:07:07.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.834681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 14 01:07:07.905176 (sd-merge)[1294]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Jan 14 01:07:07.910982 (sd-merge)[1294]: Merged extensions into '/usr'. Jan 14 01:07:07.914206 systemd-nsresourced[1299]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:07:07.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.917797 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:07:07.929029 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:07:07.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:07.941987 systemd[1]: Reload requested from client PID 1271 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:07:07.942014 systemd[1]: Reloading... Jan 14 01:07:09.167608 zram_generator::config[1341]: No configuration found. Jan 14 01:07:09.267015 systemd-oomd[1296]: No swap; memory pressure usage will be degraded Jan 14 01:07:09.335680 systemd-resolved[1297]: Positive Trust Anchors: Jan 14 01:07:09.335748 systemd-resolved[1297]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:07:09.335757 systemd-resolved[1297]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:07:09.335804 systemd-resolved[1297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:07:09.353502 systemd-resolved[1297]: Defaulting to hostname 'linux'. Jan 14 01:07:09.892508 systemd[1]: Reloading finished in 1949 ms. Jan 14 01:07:10.238167 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:07:10.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:10.253420 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:07:10.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:10.265508 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:07:10.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:10.296891 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 14 01:07:10.337364 systemd[1]: Starting ensure-sysext.service... Jan 14 01:07:10.354193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:07:10.375000 audit: BPF prog-id=28 op=LOAD Jan 14 01:07:10.389469 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 14 01:07:10.389762 kernel: audit: type=1334 audit(1768352830.375:150): prog-id=28 op=LOAD Jan 14 01:07:10.406000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:07:10.417252 kernel: audit: type=1334 audit(1768352830.406:151): prog-id=18 op=UNLOAD Jan 14 01:07:10.426738 kernel: audit: type=1334 audit(1768352830.409:152): prog-id=29 op=LOAD Jan 14 01:07:10.409000 audit: BPF prog-id=29 op=LOAD Jan 14 01:07:10.409000 audit: BPF prog-id=30 op=LOAD Jan 14 01:07:10.409000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:07:10.443216 kernel: audit: type=1334 audit(1768352830.409:153): prog-id=30 op=LOAD Jan 14 01:07:10.444816 kernel: audit: type=1334 audit(1768352830.409:154): prog-id=19 op=UNLOAD Jan 14 01:07:10.444992 kernel: audit: type=1334 audit(1768352830.409:155): prog-id=20 op=UNLOAD Jan 14 01:07:10.445092 kernel: audit: type=1334 audit(1768352830.415:156): prog-id=31 op=LOAD Jan 14 01:07:10.409000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:07:10.415000 audit: BPF prog-id=31 op=LOAD Jan 14 01:07:10.457937 kernel: audit: type=1334 audit(1768352830.415:157): prog-id=21 op=UNLOAD Jan 14 01:07:10.475206 kernel: audit: type=1334 audit(1768352830.416:158): prog-id=32 op=LOAD Jan 14 01:07:10.475409 kernel: audit: type=1334 audit(1768352830.416:159): prog-id=22 op=UNLOAD Jan 14 01:07:10.415000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:07:10.416000 audit: BPF prog-id=32 op=LOAD Jan 14 01:07:10.416000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:07:10.416000 audit: BPF prog-id=33 op=LOAD Jan 14 01:07:10.416000 audit: BPF prog-id=34 op=LOAD Jan 14 01:07:10.416000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:07:10.416000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:07:10.418000 audit: BPF 
prog-id=35 op=LOAD Jan 14 01:07:10.419000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:07:10.420000 audit: BPF prog-id=36 op=LOAD Jan 14 01:07:10.420000 audit: BPF prog-id=37 op=LOAD Jan 14 01:07:10.420000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:07:10.420000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:07:10.432000 audit: BPF prog-id=38 op=LOAD Jan 14 01:07:10.433000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:07:10.446000 audit: BPF prog-id=39 op=LOAD Jan 14 01:07:10.447000 audit: BPF prog-id=40 op=LOAD Jan 14 01:07:10.447000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:07:10.447000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:07:10.521081 systemd[1]: Reload requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:07:10.521102 systemd[1]: Reloading... Jan 14 01:07:10.561399 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:07:10.561497 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:07:10.562125 systemd-tmpfiles[1381]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:07:10.564696 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 14 01:07:10.564801 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 14 01:07:10.579932 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:07:10.579946 systemd-tmpfiles[1381]: Skipping /boot Jan 14 01:07:10.618956 systemd-tmpfiles[1381]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:07:10.619180 systemd-tmpfiles[1381]: Skipping /boot Jan 14 01:07:10.722642 zram_generator::config[1422]: No configuration found. Jan 14 01:07:11.418973 systemd[1]: Reloading finished in 896 ms. Jan 14 01:07:11.618766 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 14 01:07:11.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:11.633000 audit: BPF prog-id=41 op=LOAD Jan 14 01:07:11.633000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:07:11.633000 audit: BPF prog-id=42 op=LOAD Jan 14 01:07:11.634000 audit: BPF prog-id=43 op=LOAD Jan 14 01:07:11.634000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:07:11.634000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:07:11.635000 audit: BPF prog-id=44 op=LOAD Jan 14 01:07:11.635000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:07:11.635000 audit: BPF prog-id=45 op=LOAD Jan 14 01:07:11.635000 audit: BPF prog-id=46 op=LOAD Jan 14 01:07:11.635000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:07:11.635000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:07:11.642000 audit: BPF prog-id=47 op=LOAD Jan 14 01:07:11.643000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:07:11.645000 audit: BPF prog-id=48 op=LOAD Jan 14 01:07:11.645000 audit: BPF prog-id=49 op=LOAD Jan 14 01:07:11.645000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:07:11.645000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:07:11.647000 audit: BPF prog-id=50 op=LOAD Jan 14 01:07:11.647000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:07:11.652000 audit: BPF prog-id=51 op=LOAD Jan 14 01:07:11.652000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:07:11.653000 audit: BPF prog-id=52 op=LOAD Jan 14 01:07:11.653000 audit: BPF prog-id=53 op=LOAD Jan 14 01:07:11.653000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:07:11.653000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:07:11.680251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:07:11.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:11.714047 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:07:11.722237 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:07:11.731217 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:07:11.755788 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:07:11.763000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:07:11.763000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:07:11.764000 audit: BPF prog-id=54 op=LOAD Jan 14 01:07:11.764000 audit: BPF prog-id=55 op=LOAD Jan 14 01:07:11.767921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:07:11.770951 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:07:11.804069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:07:11.804388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:07:11.806885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:07:11.816971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:07:11.827054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:07:11.832998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:07:11.833256 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 01:07:11.833449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:07:11.833709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:07:11.838123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:07:11.838440 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:07:11.838824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:07:11.839027 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:07:11.839000 audit[1457]: SYSTEM_BOOT pid=1457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:07:11.839194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:07:11.839370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:07:11.852003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 14 01:07:11.852304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:07:11.856932 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:07:11.864146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:07:11.864463 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:07:11.864730 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:07:11.864900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:07:11.868655 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:07:11.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:11.875242 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:07:11.875732 systemd-udevd[1456]: Using default interface naming scheme 'v257'. Jan 14 01:07:11.875855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:07:11.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:07:11.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:11.963979 systemd[1]: Finished ensure-sysext.service. Jan 14 01:07:11.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:11.970827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:07:11.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.006811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:07:12.007203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:07:12.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.024102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:07:12.024686 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:07:12.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:07:12.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.043766 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:07:12.048875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:07:12.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:12.064695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:07:12.064854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:07:12.069000 audit: BPF prog-id=56 op=LOAD Jan 14 01:07:12.073135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 14 01:07:12.106000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:07:12.106000 audit[1489]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf7efe770 a2=420 a3=0 items=0 ppid=1452 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:12.106000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:07:12.109619 augenrules[1489]: No rules Jan 14 01:07:12.111912 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:07:12.112664 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:07:12.155072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:07:12.170781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:07:12.323030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:07:12.343485 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:07:12.470836 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:07:12.490225 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:07:12.725783 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 14 01:07:12.987688 systemd-networkd[1504]: lo: Link UP Jan 14 01:07:12.987703 systemd-networkd[1504]: lo: Gained carrier Jan 14 01:07:12.994478 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:07:12.994486 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:07:13.004785 systemd-networkd[1504]: eth0: Link UP Jan 14 01:07:13.010118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:07:13.011923 systemd-networkd[1504]: eth0: Gained carrier Jan 14 01:07:13.012001 systemd-networkd[1504]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:07:13.020186 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:07:13.048225 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:07:13.041597 systemd[1]: Reached target network.target - Network. Jan 14 01:07:13.054408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:07:13.063124 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:07:13.063727 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 14 01:07:13.066469 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Jan 14 01:07:13.071030 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:07:14.703268 systemd-timesyncd[1487]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 14 01:07:14.703375 systemd-timesyncd[1487]: Initial clock synchronization to Wed 2026-01-14 01:07:14.702588 UTC. Jan 14 01:07:14.703483 systemd-resolved[1297]: Clock change detected. 
Flushing caches. Jan 14 01:07:14.739117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 14 01:07:14.749016 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:07:14.765533 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:07:14.790206 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:07:15.972970 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:07:15.996407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:07:15.996871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:07:16.006311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:07:16.059047 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 14 01:07:16.080178 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 01:07:16.097567 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:07:16.186624 kernel: kvm_amd: TSC scaling supported Jan 14 01:07:16.186794 kernel: kvm_amd: Nested Virtualization enabled Jan 14 01:07:16.186822 kernel: kvm_amd: Nested Paging enabled Jan 14 01:07:16.189685 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 14 01:07:16.196880 kernel: kvm_amd: PMU virtualization is disabled Jan 14 01:07:16.328813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:07:16.412197 kernel: EDAC MC: Ver: 3.0.0 Jan 14 01:07:16.454835 ldconfig[1454]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:07:16.466796 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:07:16.476532 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 14 01:07:16.533261 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:07:16.543033 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:07:16.550860 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:07:16.558001 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:07:16.564601 systemd-networkd[1504]: eth0: Gained IPv6LL Jan 14 01:07:16.567600 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:07:16.574384 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:07:16.580489 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:07:16.587641 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:07:16.594754 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:07:16.601197 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:07:16.609433 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:07:16.609532 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:07:16.615233 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:07:16.621336 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:07:16.631862 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:07:16.641462 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:07:16.648668 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jan 14 01:07:16.657420 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:07:16.675675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:07:16.685633 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:07:16.699406 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:07:16.707305 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:07:16.720343 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:07:16.762226 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:07:16.809975 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:07:16.855759 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:07:16.855871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:07:16.858851 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:07:16.871173 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 14 01:07:16.887484 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:07:16.894417 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:07:16.906176 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:07:16.915589 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:07:16.923345 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:07:16.940314 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Jan 14 01:07:16.943468 jq[1569]: false Jan 14 01:07:16.951315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:16.954229 extend-filesystems[1570]: Found /dev/vda6 Jan 14 01:07:16.963856 extend-filesystems[1570]: Found /dev/vda9 Jan 14 01:07:16.969222 extend-filesystems[1570]: Checking size of /dev/vda9 Jan 14 01:07:16.978491 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 14 01:07:16.972352 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:07:16.982508 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Jan 14 01:07:16.988441 extend-filesystems[1570]: Resized partition /dev/vda9 Jan 14 01:07:16.994259 extend-filesystems[1584]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:07:16.995825 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:07:17.010813 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 14 01:07:17.010813 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:07:17.010813 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 14 01:07:17.010165 oslogin_cache_refresh[1571]: Failure getting users, quitting Jan 14 01:07:17.010235 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:07:17.010291 oslogin_cache_refresh[1571]: Refreshing group entry cache Jan 14 01:07:17.013446 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:07:17.022999 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 14 01:07:17.023485 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 14 01:07:17.034191 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 14 01:07:17.034191 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:07:17.033983 oslogin_cache_refresh[1571]: Failure getting groups, quitting Jan 14 01:07:17.034007 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:07:17.037145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:07:17.050471 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:07:17.056580 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:07:17.057439 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:07:17.061259 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:07:17.081245 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:07:17.098086 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:07:17.107583 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 01:07:17.119638 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 14 01:07:17.109283 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:07:17.165037 jq[1603]: true Jan 14 01:07:17.109818 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:07:17.165401 update_engine[1597]: I20260114 01:07:17.137619 1597 main.cc:92] Flatcar Update Engine starting Jan 14 01:07:17.110413 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 14 01:07:17.127126 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:07:17.127569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:07:17.166869 extend-filesystems[1584]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 01:07:17.166869 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 14 01:07:17.166869 extend-filesystems[1584]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 14 01:07:17.133610 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:07:17.196050 extend-filesystems[1570]: Resized filesystem in /dev/vda9 Jan 14 01:07:17.149849 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:07:17.151305 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:07:17.208395 jq[1617]: true Jan 14 01:07:17.170117 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:07:17.171103 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:07:17.211155 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 14 01:07:17.211614 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 14 01:07:17.267220 tar[1612]: linux-amd64/LICENSE Jan 14 01:07:17.267555 tar[1612]: linux-amd64/helm Jan 14 01:07:17.284607 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:07:17.314355 systemd-logind[1593]: Watching system buttons on /dev/input/event2 (Power Button) Jan 14 01:07:17.314399 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:07:17.318209 systemd-logind[1593]: New seat seat0. Jan 14 01:07:17.319627 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 14 01:07:17.346797 bash[1653]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:07:17.351434 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:07:17.359682 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 01:07:17.363505 dbus-daemon[1567]: [system] SELinux support is enabled Jan 14 01:07:17.364189 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:07:17.373370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:07:17.373412 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:07:17.383255 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:07:17.383334 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:07:17.395459 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 01:07:17.397226 update_engine[1597]: I20260114 01:07:17.396683 1597 update_check_scheduler.cc:74] Next update check in 10m57s Jan 14 01:07:17.397821 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:07:17.408255 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 14 01:07:17.501313 locksmithd[1655]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:07:17.628242 containerd[1618]: time="2026-01-14T01:07:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:07:17.628242 containerd[1618]: time="2026-01-14T01:07:17.627543081Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648272084Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.57µs" Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648303754Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648359237Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648373564Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648590128Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648618721Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648771998Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:07:17.649555 containerd[1618]: time="2026-01-14T01:07:17.648789831Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.652469 containerd[1618]: time="2026-01-14T01:07:17.652378991Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.652469 containerd[1618]: time="2026-01-14T01:07:17.652403287Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:07:17.652549 containerd[1618]: time="2026-01-14T01:07:17.652419287Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:07:17.652581 containerd[1618]: time="2026-01-14T01:07:17.652557845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.656475 containerd[1618]: time="2026-01-14T01:07:17.656322744Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.656529 containerd[1618]: time="2026-01-14T01:07:17.656512359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:07:17.657586 containerd[1618]: time="2026-01-14T01:07:17.656790799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.657586 containerd[1618]: time="2026-01-14T01:07:17.657376351Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.657586 containerd[1618]: time="2026-01-14T01:07:17.657426535Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:07:17.657586 containerd[1618]: time="2026-01-14T01:07:17.657444559Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:07:17.657586 containerd[1618]: time="2026-01-14T01:07:17.657489623Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:07:17.657806 containerd[1618]: time="2026-01-14T01:07:17.657774435Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:07:17.659367 containerd[1618]: time="2026-01-14T01:07:17.658138845Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.676701407Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.676874300Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677062241Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677081867Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677111813Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677129957Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service 
type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677143513Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677156406Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677170262Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677183587Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677195670Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677208995Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677219855Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:07:17.679874 containerd[1618]: time="2026-01-14T01:07:17.677245292Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677403478Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677429176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677444825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content 
type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677457288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677469551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677481293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677494177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677516769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677532649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677548859Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677571441Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677598852Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677647323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:07:17.680401 containerd[1618]: time="2026-01-14T01:07:17.677662441Z" level=info msg="Start snapshots syncer" Jan 14 01:07:17.680401 containerd[1618]: 
time="2026-01-14T01:07:17.677773027Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:07:17.680992 containerd[1618]: time="2026-01-14T01:07:17.678173726Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 
01:07:17.680992 containerd[1618]: time="2026-01-14T01:07:17.678225933Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678310351Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678482573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678509572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678539809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678557692Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678576067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678589382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678601584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678614148Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678627543Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 
01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678696813Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678775880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:07:17.681260 containerd[1618]: time="2026-01-14T01:07:17.678791058Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678804694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678817347Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678831153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678846762Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678863043Z" level=info msg="runtime interface created" Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.678873492Z" level=info msg="created NRI interface" Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.679032569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.679167280Z" level=info msg="Connect containerd service" Jan 14 01:07:17.681586 containerd[1618]: time="2026-01-14T01:07:17.679312843Z" level=info 
msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:07:17.686012 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:07:17.688417 containerd[1618]: time="2026-01-14T01:07:17.686806582Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:07:17.751389 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:07:17.766790 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:07:17.809430 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:07:17.810423 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:07:17.822803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:07:17.871180 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:07:17.872543 tar[1612]: linux-amd64/README.md Jan 14 01:07:17.913004 containerd[1618]: time="2026-01-14T01:07:17.910505734Z" level=info msg="Start subscribing containerd event" Jan 14 01:07:17.914758 containerd[1618]: time="2026-01-14T01:07:17.911080356Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:07:17.914758 containerd[1618]: time="2026-01-14T01:07:17.914320646Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 01:07:17.914758 containerd[1618]: time="2026-01-14T01:07:17.914576074Z" level=info msg="Start recovering state" Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914762071Z" level=info msg="Start event monitor" Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914789372Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914799692Z" level=info msg="Start streaming server" Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914810983Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914825340Z" level=info msg="runtime interface starting up..." Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914834517Z" level=info msg="starting plugins..." Jan 14 01:07:17.914881 containerd[1618]: time="2026-01-14T01:07:17.914853683Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:07:17.918997 containerd[1618]: time="2026-01-14T01:07:17.917351606Z" level=info msg="containerd successfully booted in 0.292528s" Jan 14 01:07:17.921150 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:07:17.932651 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:07:17.940266 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:07:17.949110 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:07:17.958247 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:07:19.900193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:19.906855 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:07:19.915392 systemd[1]: Startup finished in 7.779s (kernel) + 15.867s (initrd) + 14.824s (userspace) = 38.471s. 
Jan 14 01:07:19.934616 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:20.767492 kubelet[1706]: E0114 01:07:20.767226 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:20.771310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:20.771552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:07:20.772333 systemd[1]: kubelet.service: Consumed 1.523s CPU time, 266.3M memory peak. Jan 14 01:07:25.523694 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:07:25.526817 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:56436.service - OpenSSH per-connection server daemon (10.0.0.1:56436). Jan 14 01:07:25.728143 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 56436 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:07:25.733062 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:25.750501 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:07:25.752821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:07:25.768419 systemd-logind[1593]: New session 1 of user core. Jan 14 01:07:25.800035 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:07:25.806691 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 14 01:07:25.839842 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:25.855106 systemd-logind[1593]: New session 2 of user core. Jan 14 01:07:26.086522 systemd[1726]: Queued start job for default target default.target. Jan 14 01:07:26.100603 systemd[1726]: Created slice app.slice - User Application Slice. Jan 14 01:07:26.100690 systemd[1726]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:07:26.100708 systemd[1726]: Reached target paths.target - Paths. Jan 14 01:07:26.100860 systemd[1726]: Reached target timers.target - Timers. Jan 14 01:07:26.104818 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:07:26.110567 systemd[1726]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:07:26.158426 systemd[1726]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:07:26.160310 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:07:26.162594 systemd[1726]: Reached target sockets.target - Sockets. Jan 14 01:07:26.162815 systemd[1726]: Reached target basic.target - Basic System. Jan 14 01:07:26.163057 systemd[1726]: Reached target default.target - Main User Target. Jan 14 01:07:26.163129 systemd[1726]: Startup finished in 293ms. Jan 14 01:07:26.163209 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:07:26.171571 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:07:26.211318 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:56452.service - OpenSSH per-connection server daemon (10.0.0.1:56452). 
Jan 14 01:07:26.316277 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 56452 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:07:26.321241 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:26.334015 systemd-logind[1593]: New session 3 of user core. Jan 14 01:07:26.347261 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:07:26.376024 sshd[1744]: Connection closed by 10.0.0.1 port 56452 Jan 14 01:07:26.376486 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:26.399212 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:56452.service: Deactivated successfully. Jan 14 01:07:26.402581 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 01:07:26.404726 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit. Jan 14 01:07:26.409372 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:56466.service - OpenSSH per-connection server daemon (10.0.0.1:56466). Jan 14 01:07:26.411016 systemd-logind[1593]: Removed session 3. Jan 14 01:07:26.506845 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 56466 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:07:26.509617 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:07:26.531266 systemd-logind[1593]: New session 4 of user core. Jan 14 01:07:26.538230 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:07:26.564563 sshd[1755]: Connection closed by 10.0.0.1 port 56466 Jan 14 01:07:26.565383 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jan 14 01:07:26.590613 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:56466.service: Deactivated successfully. Jan 14 01:07:26.594710 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 01:07:26.601005 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. 
Jan 14 01:07:26.607337 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:56472.service - OpenSSH per-connection server daemon (10.0.0.1:56472).
Jan 14 01:07:26.610225 systemd-logind[1593]: Removed session 4.
Jan 14 01:07:26.712197 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 56472 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:07:26.716449 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:07:26.735647 systemd-logind[1593]: New session 5 of user core.
Jan 14 01:07:26.763243 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 01:07:26.800121 sshd[1765]: Connection closed by 10.0.0.1 port 56472
Jan 14 01:07:26.802662 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Jan 14 01:07:26.818120 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:56472.service: Deactivated successfully.
Jan 14 01:07:26.821360 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 01:07:26.824214 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit.
Jan 14 01:07:26.829461 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:56480.service - OpenSSH per-connection server daemon (10.0.0.1:56480).
Jan 14 01:07:26.832204 systemd-logind[1593]: Removed session 5.
Jan 14 01:07:26.928445 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 56480 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:07:26.932285 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:07:26.952612 systemd-logind[1593]: New session 6 of user core.
Jan 14 01:07:26.965413 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 01:07:27.030143 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 01:07:27.030655 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:07:27.062399 sudo[1777]: pam_unix(sudo:session): session closed for user root
Jan 14 01:07:27.070400 sshd[1776]: Connection closed by 10.0.0.1 port 56480
Jan 14 01:07:27.070651 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Jan 14 01:07:27.094045 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:56480.service: Deactivated successfully.
Jan 14 01:07:27.098490 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 01:07:27.104460 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit.
Jan 14 01:07:27.108423 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:56482.service - OpenSSH per-connection server daemon (10.0.0.1:56482).
Jan 14 01:07:27.110050 systemd-logind[1593]: Removed session 6.
Jan 14 01:07:27.212101 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 56482 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:07:27.217739 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:07:27.232520 systemd-logind[1593]: New session 7 of user core.
Jan 14 01:07:27.252729 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 01:07:27.280706 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 01:07:27.281356 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:07:27.289084 sudo[1790]: pam_unix(sudo:session): session closed for user root
Jan 14 01:07:27.303170 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 01:07:27.303679 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:07:27.326062 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 01:07:27.416984 kernel: kauditd_printk_skb: 64 callbacks suppressed
Jan 14 01:07:27.417107 kernel: audit: type=1305 audit(1768352847.412:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:07:27.412000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:07:27.418829 augenrules[1814]: No rules
Jan 14 01:07:27.422379 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 01:07:27.422880 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 01:07:27.429433 sudo[1789]: pam_unix(sudo:session): session closed for user root
Jan 14 01:07:27.438000 kernel: audit: type=1300 audit(1768352847.412:222): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe35852310 a2=420 a3=0 items=0 ppid=1795 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:27.412000 audit[1814]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe35852310 a2=420 a3=0 items=0 ppid=1795 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:27.438164 sshd[1788]: Connection closed by 10.0.0.1 port 56482
Jan 14 01:07:27.432649 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Jan 14 01:07:27.450056 kernel: audit: type=1327 audit(1768352847.412:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:07:27.412000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:07:27.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.473477 kernel: audit: type=1130 audit(1768352847.422:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.473581 kernel: audit: type=1131 audit(1768352847.422:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.491469 kernel: audit: type=1106 audit(1768352847.427:225): pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.427000 audit[1789]: USER_END pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.495390 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:56482.service: Deactivated successfully.
Jan 14 01:07:27.498377 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 01:07:27.501114 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit.
Jan 14 01:07:27.505813 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:56494.service - OpenSSH per-connection server daemon (10.0.0.1:56494).
Jan 14 01:07:27.427000 audit[1789]: CRED_DISP pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.512217 systemd-logind[1593]: Removed session 7.
Jan 14 01:07:27.524684 kernel: audit: type=1104 audit(1768352847.427:226): pid=1789 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.524820 kernel: audit: type=1106 audit(1768352847.433:227): pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.433000 audit[1784]: USER_END pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.549505 kernel: audit: type=1104 audit(1768352847.433:228): pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.433000 audit[1784]: CRED_DISP pid=1784 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.105:22-10.0.0.1:56482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.580243 kernel: audit: type=1131 audit(1768352847.489:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.105:22-10.0.0.1:56482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.105:22-10.0.0.1:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.597000 audit[1823]: USER_ACCT pid=1823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.598554 sshd[1823]: Accepted publickey for core from 10.0.0.1 port 56494 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:07:27.601000 audit[1823]: CRED_ACQ pid=1823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.601000 audit[1823]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdccd3f370 a2=3 a3=0 items=0 ppid=1 pid=1823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:07:27.601000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:07:27.602700 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:07:27.614615 systemd-logind[1593]: New session 8 of user core.
Jan 14 01:07:27.626697 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 01:07:27.636000 audit[1823]: USER_START pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.644000 audit[1827]: CRED_ACQ pid=1827 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:07:27.666000 audit[1828]: USER_ACCT pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.667435 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 01:07:27.666000 audit[1828]: CRED_REFR pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.669000 audit[1828]: USER_START pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:07:27.670152 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:07:28.409242 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 01:07:28.432425 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:07:28.933504 dockerd[1849]: time="2026-01-14T01:07:28.933392993Z" level=info msg="Starting up" Jan 14 01:07:28.936048 dockerd[1849]: time="2026-01-14T01:07:28.935698268Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:07:28.965954 dockerd[1849]: time="2026-01-14T01:07:28.963878285Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:07:29.071490 dockerd[1849]: time="2026-01-14T01:07:29.071295950Z" level=info msg="Loading containers: start." Jan 14 01:07:29.088994 kernel: Initializing XFRM netlink socket Jan 14 01:07:29.270000 audit[1902]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.270000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffeaf165af0 a2=0 a3=0 items=0 ppid=1849 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:07:29.277000 audit[1904]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.277000 audit[1904]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffea2c42550 a2=0 a3=0 items=0 ppid=1849 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:07:29.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:07:29.285000 audit[1906]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.285000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7c07d920 a2=0 a3=0 items=0 ppid=1849 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.285000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:07:29.292000 audit[1908]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.292000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffca614960 a2=0 a3=0 items=0 ppid=1849 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:07:29.299000 audit[1910]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.299000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdf509c80 a2=0 a3=0 items=0 ppid=1849 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.299000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:07:29.308000 audit[1912]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.308000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc7e8755b0 a2=0 a3=0 items=0 ppid=1849 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:07:29.317000 audit[1914]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.317000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd50938e00 a2=0 a3=0 items=0 ppid=1849 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.317000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:07:29.325000 audit[1916]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.325000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc4e9aa6a0 a2=0 a3=0 items=0 ppid=1849 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.325000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:07:29.396000 audit[1919]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.396000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdbabe33d0 a2=0 a3=0 items=0 ppid=1849 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:07:29.404000 audit[1921]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.404000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffee00be0d0 a2=0 a3=0 items=0 ppid=1849 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:07:29.410000 audit[1923]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.410000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc3a24fc00 a2=0 a3=0 items=0 ppid=1849 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.410000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:07:29.417000 audit[1925]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.417000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd017b1580 a2=0 a3=0 items=0 ppid=1849 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.417000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:07:29.425000 audit[1927]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.425000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd358b3580 a2=0 a3=0 items=0 ppid=1849 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:07:29.583000 audit[1957]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.583000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc3c60f00 a2=0 a3=0 items=0 ppid=1849 pid=1957 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:07:29.590000 audit[1959]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.590000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff9ba62620 a2=0 a3=0 items=0 ppid=1849 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:07:29.596000 audit[1961]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.596000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe16166f50 a2=0 a3=0 items=0 ppid=1849 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:07:29.607000 audit[1963]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.607000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd3f8fb80 a2=0 a3=0 items=0 ppid=1849 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.607000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:07:29.618000 audit[1965]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.618000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff4c80630 a2=0 a3=0 items=0 ppid=1849 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.618000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:07:29.630000 audit[1967]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.630000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd4d01c6d0 a2=0 a3=0 items=0 ppid=1849 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:07:29.637000 audit[1969]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.637000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffc22eb100 a2=0 a3=0 items=0 ppid=1849 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:07:29.646000 audit[1971]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.646000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff6f4bf8f0 a2=0 a3=0 items=0 ppid=1849 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:07:29.657000 audit[1973]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.657000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdc0838d00 a2=0 a3=0 items=0 ppid=1849 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:07:29.664000 audit[1975]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 
01:07:29.664000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9c779710 a2=0 a3=0 items=0 ppid=1849 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:07:29.673000 audit[1977]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.673000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcd5bbd840 a2=0 a3=0 items=0 ppid=1849 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:07:29.679000 audit[1979]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.679000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdfae956e0 a2=0 a3=0 items=0 ppid=1849 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:07:29.685000 audit[1981]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1981 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.685000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe048d8ba0 a2=0 a3=0 items=0 ppid=1849 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:07:29.707000 audit[1986]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.707000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf41e5890 a2=0 a3=0 items=0 ppid=1849 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:07:29.719000 audit[1988]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.719000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd81497350 a2=0 a3=0 items=0 ppid=1849 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:07:29.729000 audit[1990]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1990 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.729000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdce2a99c0 a2=0 a3=0 items=0 ppid=1849 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:07:29.747000 audit[1992]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.747000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9f3c34e0 a2=0 a3=0 items=0 ppid=1849 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:07:29.756000 audit[1994]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.756000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe203c9310 a2=0 a3=0 items=0 ppid=1849 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:07:29.768000 audit[1996]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1996 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:07:29.768000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd45e259b0 a2=0 a3=0 items=0 ppid=1849 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:07:29.818000 audit[2001]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.818000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe25088a30 a2=0 a3=0 items=0 ppid=1849 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:07:29.828000 audit[2003]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.828000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff9ea9b970 a2=0 a3=0 items=0 ppid=1849 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:07:29.874000 audit[2011]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.874000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc3cf2aa40 a2=0 a3=0 items=0 ppid=1849 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:07:29.914000 audit[2017]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.914000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff00055dd0 a2=0 a3=0 items=0 ppid=1849 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:07:29.924000 audit[2019]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.924000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd3179eff0 a2=0 a3=0 items=0 ppid=1849 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.924000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:07:29.931000 audit[2021]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.931000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd0b4bcd60 a2=0 a3=0 items=0 ppid=1849 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.931000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:07:29.944000 audit[2023]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.944000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff78946690 a2=0 a3=0 items=0 ppid=1849 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.944000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:07:29.961000 audit[2025]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:07:29.961000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf41ba560 
a2=0 a3=0 items=0 ppid=1849 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:07:29.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:07:29.968539 systemd-networkd[1504]: docker0: Link UP Jan 14 01:07:29.988951 dockerd[1849]: time="2026-01-14T01:07:29.988653390Z" level=info msg="Loading containers: done." Jan 14 01:07:30.021208 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2555704859-merged.mount: Deactivated successfully. Jan 14 01:07:30.032863 dockerd[1849]: time="2026-01-14T01:07:30.029095263Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:07:30.032863 dockerd[1849]: time="2026-01-14T01:07:30.029225987Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:07:30.032863 dockerd[1849]: time="2026-01-14T01:07:30.029338006Z" level=info msg="Initializing buildkit" Jan 14 01:07:30.111293 dockerd[1849]: time="2026-01-14T01:07:30.111097533Z" level=info msg="Completed buildkit initialization" Jan 14 01:07:30.120474 dockerd[1849]: time="2026-01-14T01:07:30.120313709Z" level=info msg="Daemon has completed initialization" Jan 14 01:07:30.120667 dockerd[1849]: time="2026-01-14T01:07:30.120548026Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:07:30.121008 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 14 01:07:30.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:30.897238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:07:30.901179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:31.296729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:31.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:31.341111 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:31.378234 containerd[1618]: time="2026-01-14T01:07:31.377471474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 01:07:32.063038 kubelet[2075]: E0114 01:07:32.062865 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:32.070158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:32.070365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:07:32.071142 systemd[1]: kubelet.service: Consumed 1.111s CPU time, 111.7M memory peak. Jan 14 01:07:32.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:07:32.693266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582516362.mount: Deactivated successfully. Jan 14 01:07:33.825196 containerd[1618]: time="2026-01-14T01:07:33.824736993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:33.828463 containerd[1618]: time="2026-01-14T01:07:33.828383351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27433143" Jan 14 01:07:33.830213 containerd[1618]: time="2026-01-14T01:07:33.830025456Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:33.834672 containerd[1618]: time="2026-01-14T01:07:33.834608358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:33.836197 containerd[1618]: time="2026-01-14T01:07:33.836120736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.458604088s" Jan 14 01:07:33.836197 containerd[1618]: time="2026-01-14T01:07:33.836174306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 01:07:33.837872 containerd[1618]: time="2026-01-14T01:07:33.837695111Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 01:07:35.398647 containerd[1618]: 
time="2026-01-14T01:07:35.398495957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:35.400003 containerd[1618]: time="2026-01-14T01:07:35.399957009Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 01:07:35.402154 containerd[1618]: time="2026-01-14T01:07:35.402062445Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:35.406330 containerd[1618]: time="2026-01-14T01:07:35.406220172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:35.407720 containerd[1618]: time="2026-01-14T01:07:35.407617501Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.569864011s" Jan 14 01:07:35.407720 containerd[1618]: time="2026-01-14T01:07:35.407676271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 01:07:35.409048 containerd[1618]: time="2026-01-14T01:07:35.408866366Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 01:07:38.055569 containerd[1618]: time="2026-01-14T01:07:38.055138716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:38.057404 containerd[1618]: time="2026-01-14T01:07:38.056279346Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 14 01:07:38.059577 containerd[1618]: time="2026-01-14T01:07:38.059431260Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:38.063650 containerd[1618]: time="2026-01-14T01:07:38.063536408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:38.064970 containerd[1618]: time="2026-01-14T01:07:38.064785509Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.655790292s" Jan 14 01:07:38.065080 containerd[1618]: time="2026-01-14T01:07:38.064971947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 01:07:38.066676 containerd[1618]: time="2026-01-14T01:07:38.066594998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 01:07:41.590483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985403367.mount: Deactivated successfully. Jan 14 01:07:42.189970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:07:42.209858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:07:43.059233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:43.075240 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 01:07:43.076331 kernel: audit: type=1130 audit(1768352863.058:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:43.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:43.356671 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:43.531152 kubelet[2169]: E0114 01:07:43.530329 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:43.540196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:43.540471 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:07:43.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:43.541430 systemd[1]: kubelet.service: Consumed 878ms CPU time, 109M memory peak. Jan 14 01:07:43.551142 kernel: audit: type=1131 audit(1768352863.540:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:07:44.761011 containerd[1618]: time="2026-01-14T01:07:44.760549958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:44.762783 containerd[1618]: time="2026-01-14T01:07:44.762485104Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 01:07:44.764429 containerd[1618]: time="2026-01-14T01:07:44.764335168Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:44.767408 containerd[1618]: time="2026-01-14T01:07:44.767320652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:44.768298 containerd[1618]: time="2026-01-14T01:07:44.768200309Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 6.701535661s" Jan 14 01:07:44.768298 containerd[1618]: time="2026-01-14T01:07:44.768272884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 01:07:44.770108 containerd[1618]: time="2026-01-14T01:07:44.770026433Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 01:07:47.650699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435590015.mount: Deactivated successfully. 
Jan 14 01:07:51.714829 containerd[1618]: time="2026-01-14T01:07:51.709457736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:51.716866 containerd[1618]: time="2026-01-14T01:07:51.715309879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18554556" Jan 14 01:07:51.719436 containerd[1618]: time="2026-01-14T01:07:51.719043359Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:51.773668 containerd[1618]: time="2026-01-14T01:07:51.773072139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:51.783031 containerd[1618]: time="2026-01-14T01:07:51.780779271Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 7.010674798s" Jan 14 01:07:51.783031 containerd[1618]: time="2026-01-14T01:07:51.781085883Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 01:07:51.784337 containerd[1618]: time="2026-01-14T01:07:51.784248149Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:07:53.236518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985502875.mount: Deactivated successfully. 
Jan 14 01:07:53.249541 containerd[1618]: time="2026-01-14T01:07:53.249241577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:07:53.251263 containerd[1618]: time="2026-01-14T01:07:53.251145718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:07:53.254510 containerd[1618]: time="2026-01-14T01:07:53.254349974Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:07:53.272022 containerd[1618]: time="2026-01-14T01:07:53.271476479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:07:53.274680 containerd[1618]: time="2026-01-14T01:07:53.274574508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.490272486s" Jan 14 01:07:53.274680 containerd[1618]: time="2026-01-14T01:07:53.274644982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:07:53.279019 containerd[1618]: time="2026-01-14T01:07:53.278848936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 01:07:53.657572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 14 01:07:53.661638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:54.074253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:54.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:54.087013 kernel: audit: type=1130 audit(1768352874.073:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:54.092572 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:07:54.171579 kubelet[2242]: E0114 01:07:54.171432 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:07:54.180685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:07:54.181054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:07:54.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:54.181613 systemd[1]: kubelet.service: Consumed 356ms CPU time, 110.7M memory peak. Jan 14 01:07:54.199137 kernel: audit: type=1131 audit(1768352874.180:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 14 01:07:54.266374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106331742.mount: Deactivated successfully. Jan 14 01:07:56.903317 containerd[1618]: time="2026-01-14T01:07:56.903195274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:56.905597 containerd[1618]: time="2026-01-14T01:07:56.904472009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45505624" Jan 14 01:07:56.906167 containerd[1618]: time="2026-01-14T01:07:56.906025133Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:56.910597 containerd[1618]: time="2026-01-14T01:07:56.910478724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:07:56.912197 containerd[1618]: time="2026-01-14T01:07:56.912081874Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.632984052s" Jan 14 01:07:56.912197 containerd[1618]: time="2026-01-14T01:07:56.912164921Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 01:07:59.201155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:07:59.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:59.201511 systemd[1]: kubelet.service: Consumed 356ms CPU time, 110.7M memory peak. Jan 14 01:07:59.205446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:07:59.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:59.220960 kernel: audit: type=1130 audit(1768352879.200:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:59.221064 kernel: audit: type=1131 audit(1768352879.200:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:07:59.256298 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-8.scope)... Jan 14 01:07:59.256495 systemd[1]: Reloading... Jan 14 01:07:59.393087 zram_generator::config[2385]: No configuration found. Jan 14 01:07:59.745565 systemd[1]: Reloading finished in 487 ms. 
Jan 14 01:07:59.784000 audit: BPF prog-id=61 op=LOAD Jan 14 01:07:59.792698 kernel: audit: type=1334 audit(1768352879.784:288): prog-id=61 op=LOAD Jan 14 01:07:59.792786 kernel: audit: type=1334 audit(1768352879.784:289): prog-id=50 op=UNLOAD Jan 14 01:07:59.784000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:07:59.794981 kernel: audit: type=1334 audit(1768352879.785:290): prog-id=62 op=LOAD Jan 14 01:07:59.785000 audit: BPF prog-id=62 op=LOAD Jan 14 01:07:59.785000 audit: BPF prog-id=63 op=LOAD Jan 14 01:07:59.800881 kernel: audit: type=1334 audit(1768352879.785:291): prog-id=63 op=LOAD Jan 14 01:07:59.801004 kernel: audit: type=1334 audit(1768352879.785:292): prog-id=54 op=UNLOAD Jan 14 01:07:59.785000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:07:59.805514 kernel: audit: type=1334 audit(1768352879.785:293): prog-id=55 op=UNLOAD Jan 14 01:07:59.785000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:07:59.808219 kernel: audit: type=1334 audit(1768352879.786:294): prog-id=64 op=LOAD Jan 14 01:07:59.786000 audit: BPF prog-id=64 op=LOAD Jan 14 01:07:59.786000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:07:59.816036 kernel: audit: type=1334 audit(1768352879.786:295): prog-id=51 op=UNLOAD Jan 14 01:07:59.786000 audit: BPF prog-id=65 op=LOAD Jan 14 01:07:59.786000 audit: BPF prog-id=66 op=LOAD Jan 14 01:07:59.786000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:07:59.786000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:07:59.787000 audit: BPF prog-id=67 op=LOAD Jan 14 01:07:59.787000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:07:59.788000 audit: BPF prog-id=68 op=LOAD Jan 14 01:07:59.788000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:07:59.790000 audit: BPF prog-id=69 op=LOAD Jan 14 01:07:59.790000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:07:59.790000 audit: BPF prog-id=70 op=LOAD Jan 14 01:07:59.790000 audit: BPF prog-id=71 op=LOAD Jan 14 01:07:59.790000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:07:59.790000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:07:59.817000 audit: BPF prog-id=72 
op=LOAD Jan 14 01:07:59.817000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:07:59.818000 audit: BPF prog-id=73 op=LOAD Jan 14 01:07:59.818000 audit: BPF prog-id=74 op=LOAD Jan 14 01:07:59.818000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:07:59.818000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:07:59.821000 audit: BPF prog-id=75 op=LOAD Jan 14 01:07:59.821000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:07:59.821000 audit: BPF prog-id=76 op=LOAD Jan 14 01:07:59.821000 audit: BPF prog-id=77 op=LOAD Jan 14 01:07:59.822000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:07:59.822000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:07:59.823000 audit: BPF prog-id=78 op=LOAD Jan 14 01:07:59.823000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:07:59.823000 audit: BPF prog-id=79 op=LOAD Jan 14 01:07:59.823000 audit: BPF prog-id=80 op=LOAD Jan 14 01:07:59.823000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:07:59.823000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:07:59.858859 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:07:59.859092 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:07:59.859604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:07:59.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:07:59.859711 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.5M memory peak. Jan 14 01:07:59.866649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:08:00.143035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:08:00.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:08:00.157560 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:08:00.256567 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:08:00.256567 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:08:00.256567 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:08:00.256567 kubelet[2430]: I0114 01:08:00.255663 2430 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:08:00.519437 kubelet[2430]: I0114 01:08:00.519327 2430 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:08:00.519437 kubelet[2430]: I0114 01:08:00.519381 2430 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:08:00.519772 kubelet[2430]: I0114 01:08:00.519667 2430 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:08:00.559969 kubelet[2430]: E0114 01:08:00.558445 2430 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:00.560970 kubelet[2430]: I0114 
01:08:00.560246 2430 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:08:00.576065 kubelet[2430]: I0114 01:08:00.575876 2430 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:08:00.583562 kubelet[2430]: I0114 01:08:00.583469 2430 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:08:00.584492 kubelet[2430]: I0114 01:08:00.583809 2430 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:08:00.584492 kubelet[2430]: I0114 01:08:00.583972 2430 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope"
:"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:08:00.584492 kubelet[2430]: I0114 01:08:00.584133 2430 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:08:00.584492 kubelet[2430]: I0114 01:08:00.584142 2430 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:08:00.584861 kubelet[2430]: I0114 01:08:00.584265 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:08:00.592991 kubelet[2430]: I0114 01:08:00.592378 2430 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:08:00.592991 kubelet[2430]: I0114 01:08:00.592533 2430 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:08:00.592991 kubelet[2430]: I0114 01:08:00.592565 2430 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:08:00.592991 kubelet[2430]: I0114 01:08:00.592581 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:08:00.601589 kubelet[2430]: W0114 01:08:00.601132 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 14 01:08:00.601589 kubelet[2430]: E0114 01:08:00.601192 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:00.601589 kubelet[2430]: W0114 01:08:00.601347 2430 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 14 01:08:00.601589 kubelet[2430]: E0114 01:08:00.601415 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:00.602053 kubelet[2430]: I0114 01:08:00.602035 2430 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:08:00.602777 kubelet[2430]: I0114 01:08:00.602716 2430 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:08:00.602831 kubelet[2430]: W0114 01:08:00.602795 2430 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 01:08:00.606360 kubelet[2430]: I0114 01:08:00.606261 2430 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:08:00.606412 kubelet[2430]: I0114 01:08:00.606384 2430 server.go:1287] "Started kubelet" Jan 14 01:08:00.606547 kubelet[2430]: I0114 01:08:00.606470 2430 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:08:00.608100 kubelet[2430]: I0114 01:08:00.607832 2430 server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:08:00.611417 kubelet[2430]: I0114 01:08:00.611313 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:08:00.612338 kubelet[2430]: I0114 01:08:00.612197 2430 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:08:00.612715 kubelet[2430]: I0114 01:08:00.612573 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:08:00.614203 kubelet[2430]: I0114 01:08:00.614143 2430 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:08:00.614424 kubelet[2430]: I0114 01:08:00.614231 2430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:08:00.614997 kubelet[2430]: E0114 01:08:00.614867 2430 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:08:00.615163 kubelet[2430]: E0114 01:08:00.615110 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Jan 14 01:08:00.616424 kubelet[2430]: I0114 01:08:00.616052 2430 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:08:00.616424 kubelet[2430]: I0114 01:08:00.616131 2430 
reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:08:00.617020 kubelet[2430]: I0114 01:08:00.616847 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:08:00.618416 kubelet[2430]: E0114 01:08:00.618354 2430 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:08:00.618778 kubelet[2430]: E0114 01:08:00.613792 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188a739366144adc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-14 01:08:00.60633366 +0000 UTC m=+0.439460950,LastTimestamp:2026-01-14 01:08:00.60633366 +0000 UTC m=+0.439460950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 14 01:08:00.619359 kubelet[2430]: I0114 01:08:00.619322 2430 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:08:00.619359 kubelet[2430]: I0114 01:08:00.619349 2430 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:08:00.617000 audit[2443]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.617000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffbaf47ab0 a2=0 a3=0 items=0 ppid=2430 pid=2443 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:08:00.620007 kubelet[2430]: W0114 01:08:00.619972 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 14 01:08:00.620133 kubelet[2430]: E0114 01:08:00.620108 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:00.620000 audit[2444]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.620000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4deca6a0 a2=0 a3=0 items=0 ppid=2430 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:08:00.625000 audit[2446]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.625000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd0f5e1050 a2=0 a3=0 items=0 
ppid=2430 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.625000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:00.630000 audit[2448]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.630000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd6ff18810 a2=0 a3=0 items=0 ppid=2430 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:00.649666 kubelet[2430]: I0114 01:08:00.649561 2430 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:08:00.649666 kubelet[2430]: I0114 01:08:00.649583 2430 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:08:00.649666 kubelet[2430]: I0114 01:08:00.649600 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:08:00.647000 audit[2455]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.647000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff2dc01340 a2=0 a3=0 items=0 ppid=2430 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.647000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:08:00.650807 kubelet[2430]: I0114 01:08:00.650681 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 01:08:00.650000 audit[2457]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:00.650000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe84b189f0 a2=0 a3=0 items=0 ppid=2430 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:08:00.652870 kubelet[2430]: I0114 01:08:00.652784 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 01:08:00.652870 kubelet[2430]: I0114 01:08:00.652850 2430 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:08:00.652870 kubelet[2430]: I0114 01:08:00.652869 2430 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 01:08:00.653006 kubelet[2430]: I0114 01:08:00.652877 2430 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:08:00.653006 kubelet[2430]: E0114 01:08:00.652982 2430 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:08:00.651000 audit[2458]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.651000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff51d9dbd0 a2=0 a3=0 items=0 ppid=2430 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:08:00.653739 kubelet[2430]: W0114 01:08:00.653638 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 14 01:08:00.653739 kubelet[2430]: E0114 01:08:00.653710 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:00.653000 audit[2461]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.653000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0973bfc0 a2=0 
a3=0 items=0 ppid=2430 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:08:00.654000 audit[2460]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:00.654000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffeecf27b0 a2=0 a3=0 items=0 ppid=2430 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:08:00.655000 audit[2462]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:00.655000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9b9818c0 a2=0 a3=0 items=0 ppid=2430 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.655000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:08:00.658000 audit[2463]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:00.658000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7fff9c5227e0 a2=0 a3=0 items=0 ppid=2430 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:08:00.660000 audit[2464]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:00.660000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8d0cdcf0 a2=0 a3=0 items=0 ppid=2430 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:00.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:08:00.716379 kubelet[2430]: E0114 01:08:00.716146 2430 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 14 01:08:00.718230 kubelet[2430]: I0114 01:08:00.717636 2430 policy_none.go:49] "None policy: Start" Jan 14 01:08:00.718230 kubelet[2430]: I0114 01:08:00.717702 2430 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:08:00.718230 kubelet[2430]: I0114 01:08:00.717724 2430 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:08:00.733042 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:08:00.749691 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 14 01:08:00.754239 kubelet[2430]: E0114 01:08:00.754117 2430 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:08:00.756467 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:08:00.779192 kubelet[2430]: I0114 01:08:00.778771 2430 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:08:00.781367 kubelet[2430]: I0114 01:08:00.780690 2430 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:08:00.781367 kubelet[2430]: I0114 01:08:00.780737 2430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:08:00.781367 kubelet[2430]: I0114 01:08:00.781042 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:08:00.784039 kubelet[2430]: E0114 01:08:00.783964 2430 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:08:00.784095 kubelet[2430]: E0114 01:08:00.784067 2430 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 14 01:08:00.816585 kubelet[2430]: E0114 01:08:00.816486 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Jan 14 01:08:00.883397 kubelet[2430]: I0114 01:08:00.883305 2430 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:00.883671 kubelet[2430]: E0114 01:08:00.883621 2430 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 14 01:08:00.968460 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 14 01:08:00.990133 kubelet[2430]: E0114 01:08:00.989878 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:00.994202 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 14 01:08:01.013132 kubelet[2430]: E0114 01:08:01.013037 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:01.018018 systemd[1]: Created slice kubepods-burstable-pod07748ac6f51c81ff9d198c00ec5fb522.slice - libcontainer container kubepods-burstable-pod07748ac6f51c81ff9d198c00ec5fb522.slice. 
Jan 14 01:08:01.021298 kubelet[2430]: E0114 01:08:01.021158 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:01.087073 kubelet[2430]: I0114 01:08:01.086772 2430 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:01.087711 kubelet[2430]: E0114 01:08:01.087546 2430 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 14 01:08:01.117766 kubelet[2430]: I0114 01:08:01.117657 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:01.117997 kubelet[2430]: I0114 01:08:01.117852 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:01.117997 kubelet[2430]: I0114 01:08:01.117977 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:01.118070 kubelet[2430]: I0114 01:08:01.118014 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:01.118070 kubelet[2430]: I0114 01:08:01.118036 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:01.118070 kubelet[2430]: I0114 01:08:01.118053 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:01.118070 kubelet[2430]: I0114 01:08:01.118066 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:01.118198 kubelet[2430]: I0114 01:08:01.118082 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:01.118198 kubelet[2430]: I0114 01:08:01.118107 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:01.217956 kubelet[2430]: E0114 01:08:01.217784 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Jan 14 01:08:01.291517 kubelet[2430]: E0114 01:08:01.291441 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.292526 containerd[1618]: time="2026-01-14T01:08:01.292452732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:01.317072 kubelet[2430]: E0114 01:08:01.316660 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.317650 containerd[1618]: time="2026-01-14T01:08:01.317487717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:01.322281 kubelet[2430]: E0114 01:08:01.322206 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.323444 containerd[1618]: time="2026-01-14T01:08:01.323413062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07748ac6f51c81ff9d198c00ec5fb522,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:01.330322 containerd[1618]: 
time="2026-01-14T01:08:01.330192232Z" level=info msg="connecting to shim c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a" address="unix:///run/containerd/s/8416039e208a8e29452486ac7e4bcce87df76e46195ac9738859e16f2cba967a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:01.367346 systemd[1]: Started cri-containerd-c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a.scope - libcontainer container c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a. Jan 14 01:08:01.386000 audit: BPF prog-id=81 op=LOAD Jan 14 01:08:01.388000 audit: BPF prog-id=82 op=LOAD Jan 14 01:08:01.388000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.388000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:08:01.388000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.388000 audit: BPF prog-id=83 op=LOAD Jan 14 01:08:01.388000 audit[2484]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.389000 audit: BPF prog-id=84 op=LOAD Jan 14 01:08:01.389000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.389000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:08:01.389000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.389000 audit: BPF prog-id=83 
op=UNLOAD Jan 14 01:08:01.389000 audit[2484]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.390000 audit: BPF prog-id=85 op=LOAD Jan 14 01:08:01.390000 audit[2484]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2474 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.390000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332353466633730356163386334623731316261336130393231643363 Jan 14 01:08:01.489990 kubelet[2430]: I0114 01:08:01.489770 2430 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:01.490407 kubelet[2430]: E0114 01:08:01.490340 2430 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Jan 14 01:08:01.518085 containerd[1618]: time="2026-01-14T01:08:01.518000994Z" level=info msg="connecting to shim 9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705" address="unix:///run/containerd/s/8d0947e0379a1341b2bd66ec73309a65d7b75103944816d3e330218d7f3cdda4" 
namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:01.520874 containerd[1618]: time="2026-01-14T01:08:01.520800612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a\"" Jan 14 01:08:01.524761 kubelet[2430]: E0114 01:08:01.524699 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.531995 containerd[1618]: time="2026-01-14T01:08:01.531814021Z" level=info msg="CreateContainer within sandbox \"c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:08:01.543852 containerd[1618]: time="2026-01-14T01:08:01.543663793Z" level=info msg="connecting to shim 4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960" address="unix:///run/containerd/s/e558b904a632bd4fcc815bbec02efca1f87c238fabf6d9f5c2c62ce681fbadab" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:01.551338 containerd[1618]: time="2026-01-14T01:08:01.551295013Z" level=info msg="Container f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:01.565966 containerd[1618]: time="2026-01-14T01:08:01.565796862Z" level=info msg="CreateContainer within sandbox \"c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1\"" Jan 14 01:08:01.569966 containerd[1618]: time="2026-01-14T01:08:01.568171228Z" level=info msg="StartContainer for \"f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1\"" Jan 14 01:08:01.570108 containerd[1618]: 
time="2026-01-14T01:08:01.569875630Z" level=info msg="connecting to shim f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1" address="unix:///run/containerd/s/8416039e208a8e29452486ac7e4bcce87df76e46195ac9738859e16f2cba967a" protocol=ttrpc version=3 Jan 14 01:08:01.578314 systemd[1]: Started cri-containerd-9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705.scope - libcontainer container 9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705. Jan 14 01:08:01.594624 systemd[1]: Started cri-containerd-4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960.scope - libcontainer container 4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960. Jan 14 01:08:01.602081 systemd[1]: Started cri-containerd-f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1.scope - libcontainer container f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1. Jan 14 01:08:01.606000 audit: BPF prog-id=86 op=LOAD Jan 14 01:08:01.607000 audit: BPF prog-id=87 op=LOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=88 op=LOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=89 op=LOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.607000 audit: BPF prog-id=90 op=LOAD Jan 14 01:08:01.607000 audit[2535]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2519 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962663536313062353239663530353461643637333564613066323834 Jan 14 01:08:01.625000 audit: BPF prog-id=91 op=LOAD Jan 14 01:08:01.627000 audit: BPF prog-id=92 op=LOAD Jan 14 01:08:01.627000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2540 
pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.627000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:08:01.627000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.628000 audit: BPF prog-id=93 op=LOAD Jan 14 01:08:01.628000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.628000 audit: BPF prog-id=94 op=LOAD Jan 14 01:08:01.628000 audit[2563]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.628000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:08:01.628000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.628000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:08:01.628000 audit[2563]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.628000 audit: BPF prog-id=95 op=LOAD Jan 14 01:08:01.628000 audit[2563]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2540 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463396536373631653034666561653331323137623866393937346161 Jan 14 01:08:01.631000 audit: BPF prog-id=96 op=LOAD Jan 14 01:08:01.632000 audit: BPF prog-id=97 op=LOAD Jan 14 01:08:01.632000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.632000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:08:01.632000 audit[2570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 
01:08:01.633000 audit: BPF prog-id=98 op=LOAD Jan 14 01:08:01.633000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.633000 audit: BPF prog-id=99 op=LOAD Jan 14 01:08:01.633000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.633000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:08:01.633000 audit[2570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.633000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.634000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:08:01.634000 audit[2570]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.634000 audit: BPF prog-id=100 op=LOAD Jan 14 01:08:01.634000 audit[2570]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2474 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631323030623165326330623631643339613934663766313737343538 Jan 14 01:08:01.696099 containerd[1618]: time="2026-01-14T01:08:01.695967920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07748ac6f51c81ff9d198c00ec5fb522,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960\"" Jan 14 01:08:01.697453 containerd[1618]: 
time="2026-01-14T01:08:01.697313096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705\"" Jan 14 01:08:01.698296 kubelet[2430]: E0114 01:08:01.698230 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.698686 kubelet[2430]: E0114 01:08:01.698470 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:01.703942 containerd[1618]: time="2026-01-14T01:08:01.703810150Z" level=info msg="CreateContainer within sandbox \"9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:08:01.705613 containerd[1618]: time="2026-01-14T01:08:01.705498053Z" level=info msg="CreateContainer within sandbox \"4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:08:01.726331 containerd[1618]: time="2026-01-14T01:08:01.726251052Z" level=info msg="StartContainer for \"f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1\" returns successfully" Jan 14 01:08:01.731048 containerd[1618]: time="2026-01-14T01:08:01.730779576Z" level=info msg="Container a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:01.737707 containerd[1618]: time="2026-01-14T01:08:01.737630541Z" level=info msg="Container d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:01.748409 containerd[1618]: 
time="2026-01-14T01:08:01.746285819Z" level=info msg="CreateContainer within sandbox \"9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d\"" Jan 14 01:08:01.749119 containerd[1618]: time="2026-01-14T01:08:01.749093172Z" level=info msg="StartContainer for \"a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d\"" Jan 14 01:08:01.750985 containerd[1618]: time="2026-01-14T01:08:01.750862601Z" level=info msg="connecting to shim a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d" address="unix:///run/containerd/s/8d0947e0379a1341b2bd66ec73309a65d7b75103944816d3e330218d7f3cdda4" protocol=ttrpc version=3 Jan 14 01:08:01.758162 containerd[1618]: time="2026-01-14T01:08:01.758130400Z" level=info msg="CreateContainer within sandbox \"4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455\"" Jan 14 01:08:01.759010 containerd[1618]: time="2026-01-14T01:08:01.758984094Z" level=info msg="StartContainer for \"d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455\"" Jan 14 01:08:01.760780 containerd[1618]: time="2026-01-14T01:08:01.760713191Z" level=info msg="connecting to shim d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455" address="unix:///run/containerd/s/e558b904a632bd4fcc815bbec02efca1f87c238fabf6d9f5c2c62ce681fbadab" protocol=ttrpc version=3 Jan 14 01:08:01.782177 systemd[1]: Started cri-containerd-a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d.scope - libcontainer container a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d. 
Jan 14 01:08:01.788767 kubelet[2430]: W0114 01:08:01.788634 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.105:6443: connect: connection refused Jan 14 01:08:01.788767 kubelet[2430]: E0114 01:08:01.788706 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:08:01.798265 systemd[1]: Started cri-containerd-d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455.scope - libcontainer container d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455. Jan 14 01:08:01.818000 audit: BPF prog-id=101 op=LOAD Jan 14 01:08:01.819000 audit: BPF prog-id=102 op=LOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=103 op=LOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=104 op=LOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.819000 audit: BPF prog-id=105 op=LOAD Jan 14 01:08:01.819000 audit[2636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2519 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366537336530656636353364306534663131663532653932656137 Jan 14 01:08:01.831000 audit: BPF prog-id=106 op=LOAD Jan 14 01:08:01.832000 audit: BPF prog-id=107 op=LOAD Jan 14 01:08:01.832000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 
ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.832000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:08:01.832000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.833000 audit: BPF prog-id=108 op=LOAD Jan 14 01:08:01.833000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.833000 audit: BPF prog-id=109 op=LOAD Jan 14 01:08:01.833000 audit[2643]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.833000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:08:01.833000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.833000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:08:01.833000 audit[2643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.833000 audit: BPF prog-id=110 op=LOAD Jan 14 01:08:01.833000 audit[2643]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2540 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:01.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435636635623931386266366336653364356430336666653862316432 Jan 14 01:08:01.900364 containerd[1618]: time="2026-01-14T01:08:01.900162255Z" level=info msg="StartContainer for \"a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d\" returns successfully" Jan 14 01:08:01.921472 containerd[1618]: time="2026-01-14T01:08:01.921343142Z" level=info msg="StartContainer for \"d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455\" returns successfully" Jan 14 01:08:02.293711 kubelet[2430]: I0114 01:08:02.293198 2430 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:02.446269 update_engine[1597]: I20260114 01:08:02.443979 1597 update_attempter.cc:509] Updating boot flags... 
Jan 14 01:08:02.678853 kubelet[2430]: E0114 01:08:02.678624 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:02.678853 kubelet[2430]: E0114 01:08:02.678760 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:02.682758 kubelet[2430]: E0114 01:08:02.682592 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:02.682758 kubelet[2430]: E0114 01:08:02.682701 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:02.688266 kubelet[2430]: E0114 01:08:02.688242 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:02.689001 kubelet[2430]: E0114 01:08:02.688965 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:03.691328 kubelet[2430]: E0114 01:08:03.691241 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:03.691879 kubelet[2430]: E0114 01:08:03.691427 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:03.693219 kubelet[2430]: E0114 01:08:03.693147 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Jan 14 01:08:03.693349 kubelet[2430]: E0114 01:08:03.693283 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:03.851985 kubelet[2430]: E0114 01:08:03.850867 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 14 01:08:03.921407 kubelet[2430]: E0114 01:08:03.921265 2430 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 14 01:08:03.921837 kubelet[2430]: E0114 01:08:03.921574 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:03.945006 kubelet[2430]: I0114 01:08:03.944499 2430 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:08:04.015752 kubelet[2430]: I0114 01:08:04.015595 2430 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:04.028398 kubelet[2430]: E0114 01:08:04.028355 2430 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:04.028398 kubelet[2430]: I0114 01:08:04.028387 2430 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:04.031523 kubelet[2430]: E0114 01:08:04.031448 2430 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:04.031523 kubelet[2430]: I0114 
01:08:04.031499 2430 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:04.033846 kubelet[2430]: E0114 01:08:04.033735 2430 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:04.600585 kubelet[2430]: I0114 01:08:04.600411 2430 apiserver.go:52] "Watching apiserver" Jan 14 01:08:04.616993 kubelet[2430]: I0114 01:08:04.616818 2430 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:08:04.690704 kubelet[2430]: I0114 01:08:04.690670 2430 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:04.691219 kubelet[2430]: I0114 01:08:04.690672 2430 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:04.693712 kubelet[2430]: E0114 01:08:04.693601 2430 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:04.694280 kubelet[2430]: E0114 01:08:04.693779 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:04.694280 kubelet[2430]: E0114 01:08:04.693625 2430 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:04.694280 kubelet[2430]: E0114 01:08:04.693974 2430 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 
01:08:06.676815 systemd[1]: Reload requested from client PID 2726 ('systemctl') (unit session-8.scope)... Jan 14 01:08:06.676877 systemd[1]: Reloading... Jan 14 01:08:06.846118 zram_generator::config[2774]: No configuration found. Jan 14 01:08:07.188442 systemd[1]: Reloading finished in 510 ms. Jan 14 01:08:07.258639 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:08:07.267756 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:08:07.269343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:08:07.269484 systemd[1]: kubelet.service: Consumed 1.208s CPU time, 130.3M memory peak. Jan 14 01:08:07.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:08:07.273421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:08:07.290177 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:08:07.290336 kernel: audit: type=1131 audit(1768352887.268:390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:08:07.274000 audit: BPF prog-id=111 op=LOAD Jan 14 01:08:07.274000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:08:07.296589 kernel: audit: type=1334 audit(1768352887.274:391): prog-id=111 op=LOAD Jan 14 01:08:07.296678 kernel: audit: type=1334 audit(1768352887.274:392): prog-id=67 op=UNLOAD Jan 14 01:08:07.296727 kernel: audit: type=1334 audit(1768352887.275:393): prog-id=112 op=LOAD Jan 14 01:08:07.275000 audit: BPF prog-id=112 op=LOAD Jan 14 01:08:07.300048 kernel: audit: type=1334 audit(1768352887.275:394): prog-id=61 op=UNLOAD Jan 14 01:08:07.275000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:08:07.278000 audit: BPF prog-id=113 op=LOAD Jan 14 01:08:07.306194 kernel: audit: type=1334 audit(1768352887.278:395): prog-id=113 op=LOAD Jan 14 01:08:07.306254 kernel: audit: type=1334 audit(1768352887.278:396): prog-id=69 op=UNLOAD Jan 14 01:08:07.278000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:08:07.310016 kernel: audit: type=1334 audit(1768352887.278:397): prog-id=114 op=LOAD Jan 14 01:08:07.278000 audit: BPF prog-id=114 op=LOAD Jan 14 01:08:07.278000 audit: BPF prog-id=115 op=LOAD Jan 14 01:08:07.317047 kernel: audit: type=1334 audit(1768352887.278:398): prog-id=115 op=LOAD Jan 14 01:08:07.317115 kernel: audit: type=1334 audit(1768352887.278:399): prog-id=70 op=UNLOAD Jan 14 01:08:07.278000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:08:07.278000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:08:07.279000 audit: BPF prog-id=116 op=LOAD Jan 14 01:08:07.279000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:08:07.279000 audit: BPF prog-id=117 op=LOAD Jan 14 01:08:07.279000 audit: BPF prog-id=118 op=LOAD Jan 14 01:08:07.279000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:08:07.279000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:08:07.281000 audit: BPF prog-id=119 op=LOAD Jan 14 01:08:07.281000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:08:07.281000 audit: BPF prog-id=120 op=LOAD Jan 14 01:08:07.281000 audit: BPF prog-id=121 op=LOAD Jan 14 01:08:07.281000 audit: BPF 
prog-id=79 op=UNLOAD Jan 14 01:08:07.281000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:08:07.285000 audit: BPF prog-id=122 op=LOAD Jan 14 01:08:07.285000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:08:07.285000 audit: BPF prog-id=123 op=LOAD Jan 14 01:08:07.285000 audit: BPF prog-id=124 op=LOAD Jan 14 01:08:07.285000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:08:07.285000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:08:07.286000 audit: BPF prog-id=125 op=LOAD Jan 14 01:08:07.286000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:08:07.286000 audit: BPF prog-id=126 op=LOAD Jan 14 01:08:07.286000 audit: BPF prog-id=127 op=LOAD Jan 14 01:08:07.286000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:08:07.286000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:08:07.288000 audit: BPF prog-id=128 op=LOAD Jan 14 01:08:07.288000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:08:07.288000 audit: BPF prog-id=129 op=LOAD Jan 14 01:08:07.311000 audit: BPF prog-id=130 op=LOAD Jan 14 01:08:07.311000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:08:07.311000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:08:07.552665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:08:07.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:08:07.566428 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:08:07.639957 kubelet[2817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:08:07.639957 kubelet[2817]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 14 01:08:07.639957 kubelet[2817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:08:07.640429 kubelet[2817]: I0114 01:08:07.640113 2817 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:08:07.650692 kubelet[2817]: I0114 01:08:07.650559 2817 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:08:07.650692 kubelet[2817]: I0114 01:08:07.650612 2817 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:08:07.650982 kubelet[2817]: I0114 01:08:07.650868 2817 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:08:07.653021 kubelet[2817]: I0114 01:08:07.652872 2817 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 01:08:07.658048 kubelet[2817]: I0114 01:08:07.657978 2817 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:08:07.671620 kubelet[2817]: I0114 01:08:07.671532 2817 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:08:07.679700 kubelet[2817]: I0114 01:08:07.679164 2817 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:08:07.679776 kubelet[2817]: I0114 01:08:07.679728 2817 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:08:07.682952 kubelet[2817]: I0114 01:08:07.679760 2817 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:08:07.682952 kubelet[2817]: I0114 01:08:07.680493 2817 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 14 01:08:07.682952 kubelet[2817]: I0114 01:08:07.680511 2817 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:08:07.682952 kubelet[2817]: I0114 01:08:07.680580 2817 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:08:07.682952 kubelet[2817]: I0114 01:08:07.680758 2817 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:08:07.683257 kubelet[2817]: I0114 01:08:07.680785 2817 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:08:07.683257 kubelet[2817]: I0114 01:08:07.680811 2817 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:08:07.683257 kubelet[2817]: I0114 01:08:07.680825 2817 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:08:07.683257 kubelet[2817]: I0114 01:08:07.681626 2817 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:08:07.683257 kubelet[2817]: I0114 01:08:07.682265 2817 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:08:07.683603 kubelet[2817]: I0114 01:08:07.683583 2817 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:08:07.683698 kubelet[2817]: I0114 01:08:07.683685 2817 server.go:1287] "Started kubelet" Jan 14 01:08:07.686276 kubelet[2817]: I0114 01:08:07.686091 2817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:08:07.686594 kubelet[2817]: I0114 01:08:07.686493 2817 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:08:07.686645 kubelet[2817]: I0114 01:08:07.686603 2817 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:08:07.687307 kubelet[2817]: I0114 01:08:07.687289 2817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:08:07.687823 kubelet[2817]: I0114 01:08:07.687750 2817 
server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:08:07.691483 kubelet[2817]: E0114 01:08:07.691127 2817 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:08:07.694726 kubelet[2817]: I0114 01:08:07.694633 2817 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:08:07.705794 kubelet[2817]: I0114 01:08:07.705752 2817 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:08:07.706157 kubelet[2817]: I0114 01:08:07.705978 2817 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:08:07.706157 kubelet[2817]: I0114 01:08:07.706124 2817 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:08:07.710036 kubelet[2817]: I0114 01:08:07.709820 2817 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:08:07.710036 kubelet[2817]: I0114 01:08:07.710029 2817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:08:07.723018 kubelet[2817]: I0114 01:08:07.722967 2817 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:08:07.728351 kubelet[2817]: I0114 01:08:07.727866 2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 01:08:07.731189 kubelet[2817]: I0114 01:08:07.730657 2817 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:08:07.731189 kubelet[2817]: I0114 01:08:07.730680 2817 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:08:07.731189 kubelet[2817]: I0114 01:08:07.730699 2817 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:08:07.731189 kubelet[2817]: I0114 01:08:07.730706 2817 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:08:07.731189 kubelet[2817]: E0114 01:08:07.730752 2817 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790623 2817 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790639 2817 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790656 2817 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790798 2817 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790807 2817 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790823 2817 policy_none.go:49] "None policy: Start" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790832 2817 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.790841 2817 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:08:07.791988 kubelet[2817]: I0114 01:08:07.791061 2817 state_mem.go:75] "Updated machine memory state" Jan 14 01:08:07.799917 kubelet[2817]: I0114 01:08:07.799611 2817 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:08:07.799917 kubelet[2817]: I0114 
01:08:07.799800 2817 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:08:07.799917 kubelet[2817]: I0114 01:08:07.799815 2817 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:08:07.804258 kubelet[2817]: I0114 01:08:07.804081 2817 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:08:07.809735 kubelet[2817]: E0114 01:08:07.809681 2817 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:08:07.832605 kubelet[2817]: I0114 01:08:07.832516 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:07.834149 kubelet[2817]: I0114 01:08:07.833702 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.834149 kubelet[2817]: I0114 01:08:07.834031 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:07.910102 kubelet[2817]: I0114 01:08:07.910031 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:07.910102 kubelet[2817]: I0114 01:08:07.910088 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.910495 kubelet[2817]: I0114 01:08:07.910118 2817 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.910495 kubelet[2817]: I0114 01:08:07.910142 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:07.910495 kubelet[2817]: I0114 01:08:07.910168 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:07.910495 kubelet[2817]: I0114 01:08:07.910189 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07748ac6f51c81ff9d198c00ec5fb522-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07748ac6f51c81ff9d198c00ec5fb522\") " pod="kube-system/kube-apiserver-localhost" Jan 14 01:08:07.910495 kubelet[2817]: I0114 01:08:07.910210 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.910652 kubelet[2817]: I0114 01:08:07.910230 2817 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.910652 kubelet[2817]: I0114 01:08:07.910253 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 14 01:08:07.922675 kubelet[2817]: I0114 01:08:07.922626 2817 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 14 01:08:07.952967 kubelet[2817]: I0114 01:08:07.952332 2817 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 14 01:08:07.952967 kubelet[2817]: I0114 01:08:07.952432 2817 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 14 01:08:08.150566 kubelet[2817]: E0114 01:08:08.150510 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.157360 kubelet[2817]: E0114 01:08:08.157152 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.158066 kubelet[2817]: E0114 01:08:08.157837 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.684177 kubelet[2817]: I0114 01:08:08.684052 2817 apiserver.go:52] "Watching apiserver" Jan 14 01:08:08.706779 
kubelet[2817]: I0114 01:08:08.706246 2817 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:08:08.769968 kubelet[2817]: E0114 01:08:08.769379 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.770491 kubelet[2817]: I0114 01:08:08.770334 2817 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:08.774047 kubelet[2817]: E0114 01:08:08.773681 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.812679 kubelet[2817]: E0114 01:08:08.812134 2817 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 14 01:08:08.812679 kubelet[2817]: E0114 01:08:08.812633 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:08.850490 kubelet[2817]: I0114 01:08:08.849647 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.849630571 podStartE2EDuration="1.849630571s" podCreationTimestamp="2026-01-14 01:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:08.849176641 +0000 UTC m=+1.276576484" watchObservedRunningTime="2026-01-14 01:08:08.849630571 +0000 UTC m=+1.277030425" Jan 14 01:08:08.850490 kubelet[2817]: I0114 01:08:08.849775 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8497694340000002 
podStartE2EDuration="1.849769434s" podCreationTimestamp="2026-01-14 01:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:08.81430576 +0000 UTC m=+1.241705614" watchObservedRunningTime="2026-01-14 01:08:08.849769434 +0000 UTC m=+1.277169288" Jan 14 01:08:08.876316 kubelet[2817]: I0114 01:08:08.875530 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.875507824 podStartE2EDuration="1.875507824s" podCreationTimestamp="2026-01-14 01:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:08.875121627 +0000 UTC m=+1.302521491" watchObservedRunningTime="2026-01-14 01:08:08.875507824 +0000 UTC m=+1.302907678" Jan 14 01:08:09.772555 kubelet[2817]: E0114 01:08:09.772428 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:09.774044 kubelet[2817]: E0114 01:08:09.773364 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:10.775806 kubelet[2817]: E0114 01:08:10.775614 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:11.245579 kubelet[2817]: I0114 01:08:11.245454 2817 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:08:11.246384 containerd[1618]: time="2026-01-14T01:08:11.246321909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 01:08:11.247569 kubelet[2817]: I0114 01:08:11.247539 2817 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:08:11.954364 systemd[1]: Created slice kubepods-besteffort-pod6e1822c3_1e07_4494_b478_beb48b6988e3.slice - libcontainer container kubepods-besteffort-pod6e1822c3_1e07_4494_b478_beb48b6988e3.slice. Jan 14 01:08:12.044304 kubelet[2817]: I0114 01:08:12.044197 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e1822c3-1e07-4494-b478-beb48b6988e3-kube-proxy\") pod \"kube-proxy-h7f5b\" (UID: \"6e1822c3-1e07-4494-b478-beb48b6988e3\") " pod="kube-system/kube-proxy-h7f5b" Jan 14 01:08:12.044799 kubelet[2817]: I0114 01:08:12.044380 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e1822c3-1e07-4494-b478-beb48b6988e3-xtables-lock\") pod \"kube-proxy-h7f5b\" (UID: \"6e1822c3-1e07-4494-b478-beb48b6988e3\") " pod="kube-system/kube-proxy-h7f5b" Jan 14 01:08:12.044799 kubelet[2817]: I0114 01:08:12.044421 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e1822c3-1e07-4494-b478-beb48b6988e3-lib-modules\") pod \"kube-proxy-h7f5b\" (UID: \"6e1822c3-1e07-4494-b478-beb48b6988e3\") " pod="kube-system/kube-proxy-h7f5b" Jan 14 01:08:12.044799 kubelet[2817]: I0114 01:08:12.044456 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z8j\" (UniqueName: \"kubernetes.io/projected/6e1822c3-1e07-4494-b478-beb48b6988e3-kube-api-access-67z8j\") pod \"kube-proxy-h7f5b\" (UID: \"6e1822c3-1e07-4494-b478-beb48b6988e3\") " pod="kube-system/kube-proxy-h7f5b" Jan 14 01:08:12.157562 kubelet[2817]: E0114 01:08:12.157482 2817 projected.go:288] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 14 01:08:12.157562 kubelet[2817]: E0114 01:08:12.157552 2817 projected.go:194] Error preparing data for projected volume kube-api-access-67z8j for pod kube-system/kube-proxy-h7f5b: configmap "kube-root-ca.crt" not found Jan 14 01:08:12.158013 kubelet[2817]: E0114 01:08:12.157625 2817 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e1822c3-1e07-4494-b478-beb48b6988e3-kube-api-access-67z8j podName:6e1822c3-1e07-4494-b478-beb48b6988e3 nodeName:}" failed. No retries permitted until 2026-01-14 01:08:12.657598406 +0000 UTC m=+5.084998260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-67z8j" (UniqueName: "kubernetes.io/projected/6e1822c3-1e07-4494-b478-beb48b6988e3-kube-api-access-67z8j") pod "kube-proxy-h7f5b" (UID: "6e1822c3-1e07-4494-b478-beb48b6988e3") : configmap "kube-root-ca.crt" not found Jan 14 01:08:12.393409 systemd[1]: Created slice kubepods-besteffort-pod22e709eb_630e_4b97_a1dc_ef0fad2bd1d3.slice - libcontainer container kubepods-besteffort-pod22e709eb_630e_4b97_a1dc_ef0fad2bd1d3.slice. 
Jan 14 01:08:12.448106 kubelet[2817]: I0114 01:08:12.447865 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22e709eb-630e-4b97-a1dc-ef0fad2bd1d3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lxvfw\" (UID: \"22e709eb-630e-4b97-a1dc-ef0fad2bd1d3\") " pod="tigera-operator/tigera-operator-7dcd859c48-lxvfw" Jan 14 01:08:12.448281 kubelet[2817]: I0114 01:08:12.448161 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8428m\" (UniqueName: \"kubernetes.io/projected/22e709eb-630e-4b97-a1dc-ef0fad2bd1d3-kube-api-access-8428m\") pod \"tigera-operator-7dcd859c48-lxvfw\" (UID: \"22e709eb-630e-4b97-a1dc-ef0fad2bd1d3\") " pod="tigera-operator/tigera-operator-7dcd859c48-lxvfw" Jan 14 01:08:12.701123 containerd[1618]: time="2026-01-14T01:08:12.700498461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lxvfw,Uid:22e709eb-630e-4b97-a1dc-ef0fad2bd1d3,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:08:12.824828 containerd[1618]: time="2026-01-14T01:08:12.824363271Z" level=info msg="connecting to shim 6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f" address="unix:///run/containerd/s/011748a168335e10c0c9715bbbc6c6556ad28d14db3c307ecc6e139b97d4a0e4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:12.872732 kubelet[2817]: E0114 01:08:12.872510 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:12.874835 containerd[1618]: time="2026-01-14T01:08:12.874670474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7f5b,Uid:6e1822c3-1e07-4494-b478-beb48b6988e3,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:12.910257 systemd[1]: Started 
cri-containerd-6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f.scope - libcontainer container 6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f. Jan 14 01:08:12.918852 containerd[1618]: time="2026-01-14T01:08:12.918148918Z" level=info msg="connecting to shim 05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b" address="unix:///run/containerd/s/729085273fed60476d88cb15e64fcc6abdd9cbcd9f4a1d2bd65d1f42ff7c5f5c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:12.940000 audit: BPF prog-id=131 op=LOAD Jan 14 01:08:12.944941 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:08:12.945043 kernel: audit: type=1334 audit(1768352892.940:432): prog-id=131 op=LOAD Jan 14 01:08:12.941000 audit: BPF prog-id=132 op=LOAD Jan 14 01:08:12.951252 kernel: audit: type=1334 audit(1768352892.941:433): prog-id=132 op=LOAD Jan 14 01:08:12.964948 kernel: audit: type=1300 audit(1768352892.941:433): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.941000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.976021 kernel: audit: type=1327 audit(1768352892.941:433): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.976090 kernel: audit: type=1334 audit(1768352892.941:434): prog-id=132 op=UNLOAD Jan 14 01:08:12.941000 audit: BPF prog-id=132 op=UNLOAD Jan 14 01:08:12.941000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.980318 systemd[1]: Started cri-containerd-05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b.scope - libcontainer container 05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b. Jan 14 01:08:12.991989 kernel: audit: type=1300 audit(1768352892.941:434): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.992060 kernel: audit: type=1327 audit(1768352892.941:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.941000 audit: BPF prog-id=133 op=LOAD Jan 
14 01:08:13.007496 kernel: audit: type=1334 audit(1768352892.941:435): prog-id=133 op=LOAD Jan 14 01:08:13.007570 kernel: audit: type=1300 audit(1768352892.941:435): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.941000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:13.033962 kernel: audit: type=1327 audit(1768352892.941:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.941000 audit: BPF prog-id=134 op=LOAD Jan 14 01:08:12.941000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.941000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.942000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:08:12.942000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.942000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:08:12.942000 audit[2887]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:12.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:12.942000 audit: BPF prog-id=135 op=LOAD Jan 14 01:08:12.942000 audit[2887]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:12.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638373664613966383236366638396336306235613336376539646130 Jan 14 01:08:13.011000 audit: BPF prog-id=136 op=LOAD Jan 14 01:08:13.012000 audit: BPF prog-id=137 op=LOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=137 op=UNLOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=138 op=LOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=139 op=LOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=138 op=UNLOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.012000 audit: BPF prog-id=140 op=LOAD Jan 14 01:08:13.012000 audit[2927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2908 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035666134386235303666636163653639333766343439663235383562 Jan 14 01:08:13.046840 containerd[1618]: time="2026-01-14T01:08:13.046292596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lxvfw,Uid:22e709eb-630e-4b97-a1dc-ef0fad2bd1d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f\"" Jan 14 01:08:13.053031 containerd[1618]: time="2026-01-14T01:08:13.052666564Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:08:13.067012 containerd[1618]: time="2026-01-14T01:08:13.066863129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7f5b,Uid:6e1822c3-1e07-4494-b478-beb48b6988e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b\"" Jan 14 01:08:13.067819 kubelet[2817]: E0114 01:08:13.067659 2817 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:13.070600 containerd[1618]: time="2026-01-14T01:08:13.070524751Z" level=info msg="CreateContainer within sandbox \"05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:08:13.092017 containerd[1618]: time="2026-01-14T01:08:13.091874811Z" level=info msg="Container 5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:13.110524 containerd[1618]: time="2026-01-14T01:08:13.110390200Z" level=info msg="CreateContainer within sandbox \"05fa48b506fcace6937f449f2585b2846a32b470cc8be0290b8cc6d95515a00b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b\"" Jan 14 01:08:13.111392 containerd[1618]: time="2026-01-14T01:08:13.111320158Z" level=info msg="StartContainer for \"5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b\"" Jan 14 01:08:13.113523 containerd[1618]: time="2026-01-14T01:08:13.113114971Z" level=info msg="connecting to shim 5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b" address="unix:///run/containerd/s/729085273fed60476d88cb15e64fcc6abdd9cbcd9f4a1d2bd65d1f42ff7c5f5c" protocol=ttrpc version=3 Jan 14 01:08:13.147180 systemd[1]: Started cri-containerd-5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b.scope - libcontainer container 5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b. 
Jan 14 01:08:13.237000 audit: BPF prog-id=141 op=LOAD Jan 14 01:08:13.237000 audit[2958]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2908 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563323733346666656363666330316634336566303763653830613135 Jan 14 01:08:13.237000 audit: BPF prog-id=142 op=LOAD Jan 14 01:08:13.237000 audit[2958]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2908 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563323733346666656363666330316634336566303763653830613135 Jan 14 01:08:13.237000 audit: BPF prog-id=142 op=UNLOAD Jan 14 01:08:13.237000 audit[2958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2908 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.237000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563323733346666656363666330316634336566303763653830613135 Jan 14 01:08:13.237000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:08:13.237000 audit[2958]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2908 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563323733346666656363666330316634336566303763653830613135 Jan 14 01:08:13.237000 audit: BPF prog-id=143 op=LOAD Jan 14 01:08:13.237000 audit[2958]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2908 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.237000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563323733346666656363666330316634336566303763653830613135 Jan 14 01:08:13.272365 containerd[1618]: time="2026-01-14T01:08:13.272243505Z" level=info msg="StartContainer for \"5c2734ffeccfc01f43ef07ce80a15f50538365ce3693c98abed40b4ebfce075b\" returns successfully" Jan 14 01:08:13.633000 audit[3023]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 14 01:08:13.633000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd99c2ab90 a2=0 a3=7ffd99c2ab7c items=0 ppid=2971 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:08:13.636000 audit[3025]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.636000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa74b64e0 a2=0 a3=7fffa74b64cc items=0 ppid=2971 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:08:13.636000 audit[3024]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.636000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6eaa1f10 a2=0 a3=7ffe6eaa1efc items=0 ppid=2971 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:08:13.641000 audit[3026]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3026 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.641000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffce276ad0 a2=0 a3=7fffce276abc items=0 ppid=2971 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:08:13.641000 audit[3027]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.641000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff30079260 a2=0 a3=7fff3007924c items=0 ppid=2971 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:08:13.647000 audit[3029]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.647000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffabef4760 a2=0 a3=7fffabef474c items=0 ppid=2971 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:08:13.742000 audit[3030]: NETFILTER_CFG table=filter:60 family=2 
entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.742000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffb14ed520 a2=0 a3=7fffb14ed50c items=0 ppid=2971 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:08:13.749000 audit[3032]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.749000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9666c270 a2=0 a3=7ffe9666c25c items=0 ppid=2971 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:08:13.758000 audit[3035]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.758000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe80095890 a2=0 a3=7ffe8009587c items=0 ppid=2971 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.758000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:08:13.761000 audit[3036]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.761000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff401f0160 a2=0 a3=7fff401f014c items=0 ppid=2971 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:08:13.766000 audit[3038]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.766000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd955a100 a2=0 a3=7ffcd955a0ec items=0 ppid=2971 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:08:13.769000 audit[3039]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.769000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffc8d1ccb10 a2=0 a3=7ffc8d1ccafc items=0 ppid=2971 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:08:13.775000 audit[3041]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.775000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefb90cc20 a2=0 a3=7ffefb90cc0c items=0 ppid=2971 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:08:13.784000 audit[3044]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.784000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc3ef9bc30 a2=0 a3=7ffc3ef9bc1c items=0 ppid=2971 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.784000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:08:13.787877 kubelet[2817]: E0114 01:08:13.787710 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:13.786000 audit[3045]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.786000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc984a420 a2=0 a3=7fffc984a40c items=0 ppid=2971 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:08:13.793000 audit[3047]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.793000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe92e4d530 a2=0 a3=7ffe92e4d51c items=0 ppid=2971 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:08:13.795000 audit[3048]: 
NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.795000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe27b84400 a2=0 a3=7ffe27b843ec items=0 ppid=2971 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:08:13.801000 audit[3050]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.801000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4e951290 a2=0 a3=7ffe4e95127c items=0 ppid=2971 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:13.816000 audit[3053]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.816000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd9958eb00 a2=0 a3=7ffd9958eaec items=0 ppid=2971 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:13.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:13.828000 audit[3056]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.828000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef82a20e0 a2=0 a3=7ffef82a20cc items=0 ppid=2971 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:08:13.832000 audit[3057]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.832000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd683158c0 a2=0 a3=7ffd683158ac items=0 ppid=2971 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:08:13.839000 audit[3059]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.839000 audit[3059]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff911da180 a2=0 a3=7fff911da16c items=0 ppid=2971 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:13.849000 audit[3062]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.849000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe94288190 a2=0 a3=7ffe9428817c items=0 ppid=2971 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:13.852000 audit[3063]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.852000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd2048e20 a2=0 a3=7ffcd2048e0c items=0 ppid=2971 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.852000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:08:13.861000 audit[3065]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:08:13.861000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdb70aa4b0 a2=0 a3=7ffdb70aa49c items=0 ppid=2971 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:08:13.904000 audit[3071]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:13.904000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeed5139f0 a2=0 a3=7ffeed5139dc items=0 ppid=2971 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:13.922000 audit[3071]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:13.922000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeed5139f0 a2=0 a3=7ffeed5139dc items=0 ppid=2971 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:13.926000 audit[3076]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.926000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe32788960 a2=0 a3=7ffe3278894c items=0 ppid=2971 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:08:13.932000 audit[3078]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.932000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd10e81090 a2=0 a3=7ffd10e8107c items=0 ppid=2971 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:08:13.942000 audit[3081]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 
01:08:13.942000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffafc5bf60 a2=0 a3=7fffafc5bf4c items=0 ppid=2971 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:08:13.945000 audit[3082]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.945000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb1763ea0 a2=0 a3=7ffeb1763e8c items=0 ppid=2971 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:08:13.951000 audit[3084]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.951000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeedb303f0 a2=0 a3=7ffeedb303dc items=0 ppid=2971 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.951000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:08:13.953000 audit[3085]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.953000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda31960e0 a2=0 a3=7ffda31960cc items=0 ppid=2971 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:08:13.961000 audit[3087]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.961000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffec632cb60 a2=0 a3=7ffec632cb4c items=0 ppid=2971 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:08:13.969000 audit[3090]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.969000 audit[3090]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fff82338ef0 a2=0 a3=7fff82338edc items=0 ppid=2971 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:08:13.971000 audit[3091]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.971000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11caee10 a2=0 a3=7fff11caedfc items=0 ppid=2971 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.971000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:08:13.977000 audit[3093]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.977000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee6c1fc70 a2=0 a3=7ffee6c1fc5c items=0 ppid=2971 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.977000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:08:13.980000 audit[3094]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.980000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4f2a97b0 a2=0 a3=7ffd4f2a979c items=0 ppid=2971 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:08:13.990000 audit[3096]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.990000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1af61650 a2=0 a3=7fff1af6163c items=0 ppid=2971 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:08:13.999000 audit[3099]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:13.999000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7fff8dd30e30 a2=0 a3=7fff8dd30e1c items=0 ppid=2971 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:13.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:08:14.008000 audit[3102]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.008000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc720f6900 a2=0 a3=7ffc720f68ec items=0 ppid=2971 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:08:14.011000 audit[3103]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.011000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff258d5a50 a2=0 a3=7fff258d5a3c items=0 ppid=2971 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.011000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:08:14.019000 audit[3105]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.019000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe1b7a8370 a2=0 a3=7ffe1b7a835c items=0 ppid=2971 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:14.027000 audit[3108]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.027000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe55519b60 a2=0 a3=7ffe55519b4c items=0 ppid=2971 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:08:14.030000 audit[3109]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.030000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec1760660 a2=0 a3=7ffec176064c items=0 ppid=2971 
pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:08:14.035000 audit[3111]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.035000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc3dceefc0 a2=0 a3=7ffc3dceefac items=0 ppid=2971 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:08:14.038000 audit[3112]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.038000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1ee1f240 a2=0 a3=7ffc1ee1f22c items=0 ppid=2971 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:08:14.043000 audit[3114]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:08:14.043000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffce46f4bf0 a2=0 a3=7ffce46f4bdc items=0 ppid=2971 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.043000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:14.047818 kubelet[2817]: E0114 01:08:14.047772 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:14.054000 audit[3117]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:08:14.054000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc20dc6b00 a2=0 a3=7ffc20dc6aec items=0 ppid=2971 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:08:14.062000 audit[3119]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:08:14.062000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff3042d4d0 a2=0 a3=7fff3042d4bc items=0 ppid=2971 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:08:14.062000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:14.063000 audit[3119]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:08:14.063000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff3042d4d0 a2=0 a3=7fff3042d4bc items=0 ppid=2971 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:14.063000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:14.077028 kubelet[2817]: I0114 01:08:14.076610 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7f5b" podStartSLOduration=3.076588525 podStartE2EDuration="3.076588525s" podCreationTimestamp="2026-01-14 01:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:08:13.80801248 +0000 UTC m=+6.235412334" watchObservedRunningTime="2026-01-14 01:08:14.076588525 +0000 UTC m=+6.503988379" Jan 14 01:08:14.669626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025723438.mount: Deactivated successfully. 
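The audit PROCTITLE fields in the entries above encode each command line as hex, with NUL bytes separating the arguments. A minimal sketch to decode one (the sample string is copied verbatim from the first PROCTITLE entry in this stream; the helper name is illustrative):

```python
# Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
# The sample below is the first PROCTITLE entry from the audit stream above.
proctitle = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D45585445524E414C2D5345525649434553"
    "002D740066696C746572"
)

def decode_proctitle(hexstr: str) -> str:
    """Split the decoded bytes on NUL and rejoin as a readable command line."""
    return " ".join(part.decode() for part in bytes.fromhex(hexstr).split(b"\x00"))

print(decode_proctitle(proctitle))
# → iptables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter
```

Decoded this way, the stream shows kube-proxy registering its KUBE-* chains (KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-POSTROUTING, and so on) in the filter and nat tables for both IPv4 (family=2) and IPv6 (family=10), followed by bulk iptables-restore/ip6tables-restore runs with --noflush --counters.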
Jan 14 01:08:14.795574 kubelet[2817]: E0114 01:08:14.795054 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:15.196008 kubelet[2817]: E0114 01:08:15.195560 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:15.808958 kubelet[2817]: E0114 01:08:15.806786 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:16.654330 containerd[1618]: time="2026-01-14T01:08:16.653383892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:16.658798 containerd[1618]: time="2026-01-14T01:08:16.658529030Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:08:16.663062 containerd[1618]: time="2026-01-14T01:08:16.662564035Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:16.668838 containerd[1618]: time="2026-01-14T01:08:16.668758142Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:16.671033 containerd[1618]: time="2026-01-14T01:08:16.670195570Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.617449822s" Jan 14 01:08:16.671033 containerd[1618]: time="2026-01-14T01:08:16.670235233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:08:16.680683 containerd[1618]: time="2026-01-14T01:08:16.680208332Z" level=info msg="CreateContainer within sandbox \"6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:08:16.773015 containerd[1618]: time="2026-01-14T01:08:16.772552446Z" level=info msg="Container ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:16.833782 containerd[1618]: time="2026-01-14T01:08:16.833403624Z" level=info msg="CreateContainer within sandbox \"6876da9f8266f89c60b5a367e9da0d508f77381dd1ba232c9d6e18e12d9c5e0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e\"" Jan 14 01:08:16.835972 containerd[1618]: time="2026-01-14T01:08:16.835750904Z" level=info msg="StartContainer for \"ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e\"" Jan 14 01:08:16.846014 kubelet[2817]: E0114 01:08:16.844117 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:16.861688 containerd[1618]: time="2026-01-14T01:08:16.856749174Z" level=info msg="connecting to shim ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e" address="unix:///run/containerd/s/011748a168335e10c0c9715bbbc6c6556ad28d14db3c307ecc6e139b97d4a0e4" protocol=ttrpc version=3 Jan 14 01:08:16.962307 systemd[1]: Started 
cri-containerd-ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e.scope - libcontainer container ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e. Jan 14 01:08:17.049000 audit: BPF prog-id=144 op=LOAD Jan 14 01:08:17.058000 audit: BPF prog-id=145 op=LOAD Jan 14 01:08:17.058000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.058000 audit: BPF prog-id=145 op=UNLOAD Jan 14 01:08:17.058000 audit[3129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.060000 audit: BPF prog-id=146 op=LOAD Jan 14 01:08:17.060000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.060000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.060000 audit: BPF prog-id=147 op=LOAD Jan 14 01:08:17.060000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.060000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:08:17.060000 audit[3129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.060000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:08:17.060000 audit[3129]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:08:17.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.060000 audit: BPF prog-id=148 op=LOAD Jan 14 01:08:17.060000 audit[3129]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2876 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:17.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363663762323035346562383764663434373566393737303963323134 Jan 14 01:08:17.153152 containerd[1618]: time="2026-01-14T01:08:17.148770655Z" level=info msg="StartContainer for \"ccf7b2054eb87df4475f97709c214f51b5506a67abacf28e66e409e86da1a41e\" returns successfully" Jan 14 01:08:17.953421 kubelet[2817]: I0114 01:08:17.952280 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lxvfw" podStartSLOduration=2.328023113 podStartE2EDuration="5.952258571s" podCreationTimestamp="2026-01-14 01:08:12 +0000 UTC" firstStartedPulling="2026-01-14 01:08:13.050605909 +0000 UTC m=+5.478005763" lastFinishedPulling="2026-01-14 01:08:16.674841367 +0000 UTC m=+9.102241221" observedRunningTime="2026-01-14 01:08:17.931787183 +0000 UTC m=+10.359187038" watchObservedRunningTime="2026-01-14 01:08:17.952258571 +0000 UTC m=+10.379658435" Jan 14 01:08:19.086197 kubelet[2817]: E0114 01:08:19.085437 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:19.961361 kubelet[2817]: E0114 01:08:19.960876 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:30.509000 audit[1828]: USER_END pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:08:30.511395 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 14 01:08:30.514363 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:08:30.514534 kernel: audit: type=1106 audit(1768352910.509:512): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:08:30.528000 audit[1828]: CRED_DISP pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:08:30.544027 kernel: audit: type=1104 audit(1768352910.528:513): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:08:30.550055 sshd[1827]: Connection closed by 10.0.0.1 port 56494 Jan 14 01:08:30.551138 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Jan 14 01:08:30.552000 audit[1823]: USER_END pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:08:30.558761 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:08:30.562592 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:56494.service: Deactivated successfully. Jan 14 01:08:30.584823 kernel: audit: type=1106 audit(1768352910.552:514): pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:08:30.585048 kernel: audit: type=1104 audit(1768352910.552:515): pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:08:30.552000 audit[1823]: CRED_DISP pid=1823 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:08:30.573015 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:08:30.573393 systemd[1]: session-8.scope: Consumed 6.555s CPU time, 219M memory peak. Jan 14 01:08:30.580879 systemd-logind[1593]: Removed session 8. 
Jan 14 01:08:30.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.105:22-10.0.0.1:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:08:30.600042 kernel: audit: type=1131 audit(1768352910.562:516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.105:22-10.0.0.1:56494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:08:31.405000 audit[3224]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.405000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf66eac80 a2=0 a3=7ffcf66eac6c items=0 ppid=2971 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.443483 kernel: audit: type=1325 audit(1768352911.405:517): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.443600 kernel: audit: type=1300 audit(1768352911.405:517): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf66eac80 a2=0 a3=7ffcf66eac6c items=0 ppid=2971 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:31.465238 kernel: audit: type=1327 audit(1768352911.405:517): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:31.465338 kernel: audit: type=1325 audit(1768352911.444:518): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.444000 audit[3224]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.480058 kernel: audit: type=1300 audit(1768352911.444:518): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf66eac80 a2=0 a3=0 items=0 ppid=2971 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.444000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf66eac80 a2=0 a3=0 items=0 ppid=2971 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:31.498000 audit[3226]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.498000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffef8505290 a2=0 a3=7ffef850527c items=0 ppid=2971 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.498000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:31.518000 audit[3226]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:31.518000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef8505290 a2=0 a3=0 items=0 ppid=2971 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:31.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.144280 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:08:36.144520 kernel: audit: type=1325 audit(1768352916.122:521): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.122000 audit[3228]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.155163 kernel: audit: type=1300 audit(1768352916.122:521): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc1fd737a0 a2=0 a3=7ffc1fd7378c items=0 ppid=2971 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.122000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc1fd737a0 a2=0 a3=7ffc1fd7378c items=0 ppid=2971 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:08:36.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.221805 kernel: audit: type=1327 audit(1768352916.122:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.222109 kernel: audit: type=1325 audit(1768352916.205:522): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.205000 audit[3228]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.205000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fd737a0 a2=0 a3=0 items=0 ppid=2971 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.257303 kernel: audit: type=1300 audit(1768352916.205:522): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1fd737a0 a2=0 a3=0 items=0 ppid=2971 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.257377 kernel: audit: type=1327 audit(1768352916.205:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.288000 audit[3230]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.307079 kernel: audit: 
type=1325 audit(1768352916.288:523): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.307208 kernel: audit: type=1300 audit(1768352916.288:523): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcc0c7c520 a2=0 a3=7ffcc0c7c50c items=0 ppid=2971 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.288000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcc0c7c520 a2=0 a3=7ffcc0c7c50c items=0 ppid=2971 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:36.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.361545 kernel: audit: type=1327 audit(1768352916.288:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:36.365618 kernel: audit: type=1325 audit(1768352916.312:524): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.312000 audit[3230]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:36.312000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc0c7c520 a2=0 a3=0 items=0 ppid=2971 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:36.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:37.400000 audit[3232]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:37.400000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdb887f2c0 a2=0 a3=7ffdb887f2ac items=0 ppid=2971 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:37.408000 audit[3232]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:37.408000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdb887f2c0 a2=0 a3=0 items=0 ppid=2971 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:37.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:39.061000 audit[3234]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:39.061000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd3cee2790 a2=0 a3=7ffd3cee277c items=0 ppid=2971 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.061000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:39.078000 audit[3234]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:39.078000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd3cee2790 a2=0 a3=0 items=0 ppid=2971 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.078000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:39.099000 audit[3236]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:39.099000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff9aef7660 a2=0 a3=7fff9aef764c items=0 ppid=2971 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.099000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:39.113000 audit[3236]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3236 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:39.113000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9aef7660 a2=0 a3=0 items=0 ppid=2971 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:39.213106 systemd[1]: Created slice kubepods-besteffort-pod34e849e7_cdaa_405d_9b92_75b8afae025d.slice - libcontainer container kubepods-besteffort-pod34e849e7_cdaa_405d_9b92_75b8afae025d.slice. Jan 14 01:08:39.240979 kubelet[2817]: I0114 01:08:39.240504 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mq8k\" (UniqueName: \"kubernetes.io/projected/34e849e7-cdaa-405d-9b92-75b8afae025d-kube-api-access-7mq8k\") pod \"calico-typha-68df746c86-vngpk\" (UID: \"34e849e7-cdaa-405d-9b92-75b8afae025d\") " pod="calico-system/calico-typha-68df746c86-vngpk" Jan 14 01:08:39.240979 kubelet[2817]: I0114 01:08:39.240673 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/34e849e7-cdaa-405d-9b92-75b8afae025d-typha-certs\") pod \"calico-typha-68df746c86-vngpk\" (UID: \"34e849e7-cdaa-405d-9b92-75b8afae025d\") " pod="calico-system/calico-typha-68df746c86-vngpk" Jan 14 01:08:39.240979 kubelet[2817]: I0114 01:08:39.240730 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e849e7-cdaa-405d-9b92-75b8afae025d-tigera-ca-bundle\") pod \"calico-typha-68df746c86-vngpk\" (UID: \"34e849e7-cdaa-405d-9b92-75b8afae025d\") " pod="calico-system/calico-typha-68df746c86-vngpk" Jan 14 01:08:39.437102 systemd[1]: Created slice kubepods-besteffort-pod8c2c4953_7e54_4087_a856_344916ac5e18.slice - libcontainer container kubepods-besteffort-pod8c2c4953_7e54_4087_a856_344916ac5e18.slice. 
Jan 14 01:08:39.533694 kubelet[2817]: E0114 01:08:39.533651 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:39.537680 containerd[1618]: time="2026-01-14T01:08:39.537039601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68df746c86-vngpk,Uid:34e849e7-cdaa-405d-9b92-75b8afae025d,Namespace:calico-system,Attempt:0,}" Jan 14 01:08:39.544814 kubelet[2817]: I0114 01:08:39.544765 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-flexvol-driver-host\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545003 kubelet[2817]: I0114 01:08:39.544829 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2c4953-7e54-4087-a856-344916ac5e18-tigera-ca-bundle\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545003 kubelet[2817]: I0114 01:08:39.544861 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-policysync\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545003 kubelet[2817]: I0114 01:08:39.544977 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wcfw\" (UniqueName: \"kubernetes.io/projected/8c2c4953-7e54-4087-a856-344916ac5e18-kube-api-access-9wcfw\") pod \"calico-node-ccsn7\" (UID: 
\"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545123 kubelet[2817]: I0114 01:08:39.545045 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-cni-log-dir\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545123 kubelet[2817]: I0114 01:08:39.545071 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-var-run-calico\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545123 kubelet[2817]: I0114 01:08:39.545096 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-var-lib-calico\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545123 kubelet[2817]: I0114 01:08:39.545117 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-cni-bin-dir\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545297 kubelet[2817]: I0114 01:08:39.545136 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-lib-modules\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" 
Jan 14 01:08:39.545297 kubelet[2817]: I0114 01:08:39.545156 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c2c4953-7e54-4087-a856-344916ac5e18-node-certs\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545297 kubelet[2817]: I0114 01:08:39.545174 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-xtables-lock\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.545297 kubelet[2817]: I0114 01:08:39.545194 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c2c4953-7e54-4087-a856-344916ac5e18-cni-net-dir\") pod \"calico-node-ccsn7\" (UID: \"8c2c4953-7e54-4087-a856-344916ac5e18\") " pod="calico-system/calico-node-ccsn7" Jan 14 01:08:39.589005 kubelet[2817]: E0114 01:08:39.587841 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:39.645606 kubelet[2817]: I0114 01:08:39.645545 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ba6f7f37-698f-4697-a408-a3efabbcf48e-kubelet-dir\") pod \"csi-node-driver-q8jc6\" (UID: \"ba6f7f37-698f-4697-a408-a3efabbcf48e\") " pod="calico-system/csi-node-driver-q8jc6" Jan 14 01:08:39.645725 kubelet[2817]: I0114 01:08:39.645628 2817 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ba6f7f37-698f-4697-a408-a3efabbcf48e-registration-dir\") pod \"csi-node-driver-q8jc6\" (UID: \"ba6f7f37-698f-4697-a408-a3efabbcf48e\") " pod="calico-system/csi-node-driver-q8jc6" Jan 14 01:08:39.645725 kubelet[2817]: I0114 01:08:39.645702 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ba6f7f37-698f-4697-a408-a3efabbcf48e-socket-dir\") pod \"csi-node-driver-q8jc6\" (UID: \"ba6f7f37-698f-4697-a408-a3efabbcf48e\") " pod="calico-system/csi-node-driver-q8jc6" Jan 14 01:08:39.645725 kubelet[2817]: I0114 01:08:39.645719 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g5n9\" (UniqueName: \"kubernetes.io/projected/ba6f7f37-698f-4697-a408-a3efabbcf48e-kube-api-access-2g5n9\") pod \"csi-node-driver-q8jc6\" (UID: \"ba6f7f37-698f-4697-a408-a3efabbcf48e\") " pod="calico-system/csi-node-driver-q8jc6" Jan 14 01:08:39.645796 kubelet[2817]: I0114 01:08:39.645749 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ba6f7f37-698f-4697-a408-a3efabbcf48e-varrun\") pod \"csi-node-driver-q8jc6\" (UID: \"ba6f7f37-698f-4697-a408-a3efabbcf48e\") " pod="calico-system/csi-node-driver-q8jc6" Jan 14 01:08:39.661063 kubelet[2817]: E0114 01:08:39.658316 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.661063 kubelet[2817]: W0114 01:08:39.659209 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.661063 kubelet[2817]: E0114 
01:08:39.659309 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.661578 kubelet[2817]: E0114 01:08:39.661500 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.661639 kubelet[2817]: W0114 01:08:39.661629 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.662765 kubelet[2817]: E0114 01:08:39.661648 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.665952 kubelet[2817]: E0114 01:08:39.665665 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.665952 kubelet[2817]: W0114 01:08:39.665701 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.665952 kubelet[2817]: E0114 01:08:39.665715 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.667864 kubelet[2817]: E0114 01:08:39.667804 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.667864 kubelet[2817]: W0114 01:08:39.667817 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.667864 kubelet[2817]: E0114 01:08:39.667832 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.676807 kubelet[2817]: E0114 01:08:39.676736 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.676807 kubelet[2817]: W0114 01:08:39.676796 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.677246 kubelet[2817]: E0114 01:08:39.676816 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.691804 containerd[1618]: time="2026-01-14T01:08:39.691467858Z" level=info msg="connecting to shim 0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1" address="unix:///run/containerd/s/644adb49e9e8bd435b02d82fa022cfb3fc9aa9b6cdb1d9ec3dffa321e3234664" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.747604 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.748676 kubelet[2817]: W0114 01:08:39.747662 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.747685 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.748012 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.748676 kubelet[2817]: W0114 01:08:39.748023 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.748036 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.748254 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.748676 kubelet[2817]: W0114 01:08:39.748263 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.748274 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.748676 kubelet[2817]: E0114 01:08:39.748602 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.749207 kubelet[2817]: W0114 01:08:39.748614 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.749207 kubelet[2817]: E0114 01:08:39.748627 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.749207 kubelet[2817]: E0114 01:08:39.749024 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.749207 kubelet[2817]: W0114 01:08:39.749035 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.749207 kubelet[2817]: E0114 01:08:39.749049 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.749363 kubelet[2817]: E0114 01:08:39.749310 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.749363 kubelet[2817]: W0114 01:08:39.749320 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.749363 kubelet[2817]: E0114 01:08:39.749331 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.749608 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.752956 kubelet[2817]: W0114 01:08:39.749621 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.749633 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.749841 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.752956 kubelet[2817]: W0114 01:08:39.749850 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.749863 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.750288 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.752956 kubelet[2817]: W0114 01:08:39.750300 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.750311 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.752956 kubelet[2817]: E0114 01:08:39.750600 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.753517 kubelet[2817]: W0114 01:08:39.750612 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.750625 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.750825 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.753517 kubelet[2817]: W0114 01:08:39.750835 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.750846 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.751153 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.753517 kubelet[2817]: W0114 01:08:39.751163 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.751174 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.753517 kubelet[2817]: E0114 01:08:39.751464 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.753517 kubelet[2817]: W0114 01:08:39.751474 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.751484 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.751686 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.755817 kubelet[2817]: W0114 01:08:39.751696 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.751706 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.752025 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.755817 kubelet[2817]: W0114 01:08:39.752039 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.752050 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.752286 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.755817 kubelet[2817]: W0114 01:08:39.752297 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.755817 kubelet[2817]: E0114 01:08:39.752312 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.752625 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.756335 kubelet[2817]: W0114 01:08:39.752638 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.752651 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.752951 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.756335 kubelet[2817]: W0114 01:08:39.752964 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.752977 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.753197 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.756335 kubelet[2817]: W0114 01:08:39.753208 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.753219 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.756335 kubelet[2817]: E0114 01:08:39.753481 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.757288 kubelet[2817]: W0114 01:08:39.753493 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.757288 kubelet[2817]: E0114 01:08:39.753504 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.764258 kubelet[2817]: E0114 01:08:39.764178 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.764943 kubelet[2817]: W0114 01:08:39.764607 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.764943 kubelet[2817]: E0114 01:08:39.764628 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.766322 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.768678 kubelet[2817]: W0114 01:08:39.767418 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.767442 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.767713 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.768678 kubelet[2817]: W0114 01:08:39.767724 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.767740 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.768097 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.768678 kubelet[2817]: W0114 01:08:39.768106 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.768115 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:39.768678 kubelet[2817]: E0114 01:08:39.768484 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.769048 kubelet[2817]: W0114 01:08:39.768494 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.769048 kubelet[2817]: E0114 01:08:39.768503 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.769048 kubelet[2817]: E0114 01:08:39.768782 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.769048 kubelet[2817]: W0114 01:08:39.768790 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.769048 kubelet[2817]: E0114 01:08:39.768798 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.830144 systemd[1]: Started cri-containerd-0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1.scope - libcontainer container 0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1. 
Jan 14 01:08:39.887433 kubelet[2817]: E0114 01:08:39.885709 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:39.887433 kubelet[2817]: W0114 01:08:39.885768 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:39.887433 kubelet[2817]: E0114 01:08:39.885807 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:39.890000 audit: BPF prog-id=149 op=LOAD Jan 14 01:08:39.895000 audit: BPF prog-id=150 op=LOAD Jan 14 01:08:39.895000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.895000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:08:39.895000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.895000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.895000 audit: BPF prog-id=151 op=LOAD Jan 14 01:08:39.895000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.896000 audit: BPF prog-id=152 op=LOAD Jan 14 01:08:39.896000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.896000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:08:39.896000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:08:39.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.896000 audit: BPF prog-id=151 op=UNLOAD Jan 14 01:08:39.896000 audit[3272]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:39.896000 audit: BPF prog-id=153 op=LOAD Jan 14 01:08:39.896000 audit[3272]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3254 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:39.896000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062356630343635306438303835353430663937313664386239636266 Jan 14 01:08:40.014726 containerd[1618]: time="2026-01-14T01:08:40.014344346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68df746c86-vngpk,Uid:34e849e7-cdaa-405d-9b92-75b8afae025d,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1\"" Jan 14 01:08:40.016117 kubelet[2817]: E0114 01:08:40.015850 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:40.034365 containerd[1618]: time="2026-01-14T01:08:40.034289188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:08:40.044329 kubelet[2817]: E0114 01:08:40.044282 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:40.045590 containerd[1618]: time="2026-01-14T01:08:40.045538594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ccsn7,Uid:8c2c4953-7e54-4087-a856-344916ac5e18,Namespace:calico-system,Attempt:0,}" Jan 14 01:08:40.147224 containerd[1618]: time="2026-01-14T01:08:40.147088670Z" level=info msg="connecting to shim 549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e" address="unix:///run/containerd/s/1d419724286ef7d944f39c0094d6988ff6e5fe7111fc8b452b735fa11a325900" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:08:40.149000 audit[3338]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:40.149000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd2b860870 a2=0 a3=7ffd2b86085c items=0 ppid=2971 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:40.156000 audit[3338]: NETFILTER_CFG 
table=nat:120 family=2 entries=12 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:40.156000 audit[3338]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2b860870 a2=0 a3=0 items=0 ppid=2971 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.156000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:40.224258 systemd[1]: Started cri-containerd-549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e.scope - libcontainer container 549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e. Jan 14 01:08:40.262000 audit: BPF prog-id=154 op=LOAD Jan 14 01:08:40.263000 audit: BPF prog-id=155 op=LOAD Jan 14 01:08:40.263000 audit[3349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.263000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.263000 audit: BPF prog-id=155 op=UNLOAD Jan 14 01:08:40.263000 audit[3349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.263000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.264000 audit: BPF prog-id=156 op=LOAD Jan 14 01:08:40.264000 audit[3349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.264000 audit: BPF prog-id=157 op=LOAD Jan 14 01:08:40.264000 audit[3349]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.264000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:08:40.264000 audit[3349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:08:40.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.264000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:08:40.264000 audit[3349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.264000 audit: BPF prog-id=158 op=LOAD Jan 14 01:08:40.264000 audit[3349]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:40.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534396366623731633936333564366230613339306563343162616163 Jan 14 01:08:40.316126 containerd[1618]: time="2026-01-14T01:08:40.316036415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ccsn7,Uid:8c2c4953-7e54-4087-a856-344916ac5e18,Namespace:calico-system,Attempt:0,} returns sandbox id \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\"" 
Jan 14 01:08:40.319464 kubelet[2817]: E0114 01:08:40.318602 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:40.732675 kubelet[2817]: E0114 01:08:40.732002 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:41.169372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958452980.mount: Deactivated successfully. Jan 14 01:08:42.731946 kubelet[2817]: E0114 01:08:42.731215 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:43.594193 containerd[1618]: time="2026-01-14T01:08:43.593113074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.597674 containerd[1618]: time="2026-01-14T01:08:43.597591203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:08:43.600681 containerd[1618]: time="2026-01-14T01:08:43.600538291Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.607329 containerd[1618]: time="2026-01-14T01:08:43.607236836Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:43.611322 containerd[1618]: time="2026-01-14T01:08:43.610557159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.576223096s" Jan 14 01:08:43.611322 containerd[1618]: time="2026-01-14T01:08:43.610594739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:08:43.612669 containerd[1618]: time="2026-01-14T01:08:43.612644156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:08:43.662528 containerd[1618]: time="2026-01-14T01:08:43.662370718Z" level=info msg="CreateContainer within sandbox \"0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:08:43.707969 containerd[1618]: time="2026-01-14T01:08:43.706078537Z" level=info msg="Container d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:43.749227 containerd[1618]: time="2026-01-14T01:08:43.749126664Z" level=info msg="CreateContainer within sandbox \"0b5f04650d8085540f9716d8b9cbfa250bb93affc52bf1ef8cf299579356fdc1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696\"" Jan 14 01:08:43.750239 containerd[1618]: time="2026-01-14T01:08:43.750158996Z" level=info msg="StartContainer for 
\"d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696\"" Jan 14 01:08:43.759292 containerd[1618]: time="2026-01-14T01:08:43.758676879Z" level=info msg="connecting to shim d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696" address="unix:///run/containerd/s/644adb49e9e8bd435b02d82fa022cfb3fc9aa9b6cdb1d9ec3dffa321e3234664" protocol=ttrpc version=3 Jan 14 01:08:43.848685 systemd[1]: Started cri-containerd-d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696.scope - libcontainer container d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696. Jan 14 01:08:43.946000 audit: BPF prog-id=159 op=LOAD Jan 14 01:08:43.956436 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 14 01:08:43.956525 kernel: audit: type=1334 audit(1768352923.946:549): prog-id=159 op=LOAD Jan 14 01:08:43.949000 audit: BPF prog-id=160 op=LOAD Jan 14 01:08:43.963366 kernel: audit: type=1334 audit(1768352923.949:550): prog-id=160 op=LOAD Jan 14 01:08:43.963483 kernel: audit: type=1300 audit(1768352923.949:550): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:44.005955 kernel: audit: type=1327 
audit(1768352923.949:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:44.007002 kernel: audit: type=1334 audit(1768352923.949:551): prog-id=160 op=UNLOAD Jan 14 01:08:43.949000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.030083 kernel: audit: type=1300 audit(1768352923.949:551): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.043367 kernel: audit: type=1327 audit(1768352923.949:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: BPF prog-id=161 op=LOAD Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.066961 kernel: audit: type=1334 audit(1768352923.949:552): prog-id=161 op=LOAD Jan 14 01:08:44.067092 kernel: audit: type=1300 audit(1768352923.949:552): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:44.067162 kernel: audit: type=1327 audit(1768352923.949:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: BPF prog-id=162 op=LOAD Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:08:43.949000 audit[3387]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:43.949000 audit: BPF prog-id=163 op=LOAD Jan 14 01:08:43.949000 audit[3387]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3254 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:43.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431373639303738313763343461313539663162636436373032393363 Jan 14 01:08:44.109555 containerd[1618]: 
time="2026-01-14T01:08:44.107044325Z" level=info msg="StartContainer for \"d176907817c44a159f1bcd670293ccce7012c560fa34b1b3ffcf50e09f5b5696\" returns successfully" Jan 14 01:08:44.468296 kubelet[2817]: E0114 01:08:44.468172 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:44.510221 kubelet[2817]: E0114 01:08:44.510003 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.510221 kubelet[2817]: W0114 01:08:44.510141 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.510221 kubelet[2817]: E0114 01:08:44.510168 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.517517 kubelet[2817]: E0114 01:08:44.517464 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.517517 kubelet[2817]: W0114 01:08:44.517483 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.517517 kubelet[2817]: E0114 01:08:44.517503 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.517517 kubelet[2817]: E0114 01:08:44.517828 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.517517 kubelet[2817]: W0114 01:08:44.517841 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.517517 kubelet[2817]: E0114 01:08:44.518038 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.518460 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.523574 kubelet[2817]: W0114 01:08:44.518472 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.518485 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.518720 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.523574 kubelet[2817]: W0114 01:08:44.518730 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.518742 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.519125 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.523574 kubelet[2817]: W0114 01:08:44.519134 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.519144 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.523574 kubelet[2817]: E0114 01:08:44.519343 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.523988 kubelet[2817]: W0114 01:08:44.519354 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.523988 kubelet[2817]: E0114 01:08:44.519363 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.525530 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.527207 kubelet[2817]: W0114 01:08:44.525689 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.525704 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.526051 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.527207 kubelet[2817]: W0114 01:08:44.526062 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.526074 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.526293 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.527207 kubelet[2817]: W0114 01:08:44.526303 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.527207 kubelet[2817]: E0114 01:08:44.526314 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.528062 kubelet[2817]: E0114 01:08:44.527659 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.528062 kubelet[2817]: W0114 01:08:44.527672 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.528062 kubelet[2817]: E0114 01:08:44.527685 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.529824 kubelet[2817]: E0114 01:08:44.529696 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.529824 kubelet[2817]: W0114 01:08:44.529744 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.529824 kubelet[2817]: E0114 01:08:44.529758 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.530267 kubelet[2817]: E0114 01:08:44.530070 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.530267 kubelet[2817]: W0114 01:08:44.530083 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.530267 kubelet[2817]: E0114 01:08:44.530096 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.536025 kubelet[2817]: E0114 01:08:44.530361 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.536025 kubelet[2817]: W0114 01:08:44.530372 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.536025 kubelet[2817]: E0114 01:08:44.530443 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.536025 kubelet[2817]: E0114 01:08:44.530650 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.536025 kubelet[2817]: W0114 01:08:44.530660 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.536025 kubelet[2817]: E0114 01:08:44.530673 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.595862 kubelet[2817]: E0114 01:08:44.595011 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.595862 kubelet[2817]: W0114 01:08:44.595170 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.595862 kubelet[2817]: E0114 01:08:44.595199 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.603522 kubelet[2817]: E0114 01:08:44.602276 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.603522 kubelet[2817]: W0114 01:08:44.602325 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.603522 kubelet[2817]: E0114 01:08:44.602684 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.607533 kubelet[2817]: E0114 01:08:44.604814 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.607533 kubelet[2817]: W0114 01:08:44.606985 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.607533 kubelet[2817]: E0114 01:08:44.607014 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.609213 kubelet[2817]: E0114 01:08:44.609155 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.609213 kubelet[2817]: W0114 01:08:44.609188 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.609319 kubelet[2817]: E0114 01:08:44.609245 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.609708 kubelet[2817]: E0114 01:08:44.609632 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.609708 kubelet[2817]: W0114 01:08:44.609681 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.610274 kubelet[2817]: E0114 01:08:44.609822 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.610341 kubelet[2817]: E0114 01:08:44.610323 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.610341 kubelet[2817]: W0114 01:08:44.610334 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.610713 kubelet[2817]: E0114 01:08:44.610620 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.611447 kubelet[2817]: E0114 01:08:44.611215 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.611519 kubelet[2817]: W0114 01:08:44.611452 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.611670 kubelet[2817]: E0114 01:08:44.611607 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.613110 kubelet[2817]: E0114 01:08:44.612452 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.613110 kubelet[2817]: W0114 01:08:44.612498 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.613110 kubelet[2817]: E0114 01:08:44.612696 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.614794 kubelet[2817]: E0114 01:08:44.614657 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.614794 kubelet[2817]: W0114 01:08:44.614709 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.615023 kubelet[2817]: E0114 01:08:44.614855 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.617078 kubelet[2817]: E0114 01:08:44.616997 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.617151 kubelet[2817]: W0114 01:08:44.617121 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.617675 kubelet[2817]: E0114 01:08:44.617621 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.625227 kubelet[2817]: E0114 01:08:44.623999 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.625227 kubelet[2817]: W0114 01:08:44.624016 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.625227 kubelet[2817]: E0114 01:08:44.624285 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.627049 kubelet[2817]: E0114 01:08:44.626712 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.634055 kubelet[2817]: W0114 01:08:44.626726 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.634055 kubelet[2817]: E0114 01:08:44.633528 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.643990 kubelet[2817]: I0114 01:08:44.643842 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68df746c86-vngpk" podStartSLOduration=2.063759537 podStartE2EDuration="5.643826685s" podCreationTimestamp="2026-01-14 01:08:39 +0000 UTC" firstStartedPulling="2026-01-14 01:08:40.031542141 +0000 UTC m=+32.458941995" lastFinishedPulling="2026-01-14 01:08:43.611609288 +0000 UTC m=+36.039009143" observedRunningTime="2026-01-14 01:08:44.631290356 +0000 UTC m=+37.058690220" watchObservedRunningTime="2026-01-14 01:08:44.643826685 +0000 UTC m=+37.071226569" Jan 14 01:08:44.652528 kubelet[2817]: E0114 01:08:44.650320 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.652528 kubelet[2817]: W0114 01:08:44.650336 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.652528 kubelet[2817]: E0114 01:08:44.650956 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.652528 kubelet[2817]: E0114 01:08:44.651044 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.652528 kubelet[2817]: W0114 01:08:44.651052 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.652528 kubelet[2817]: E0114 01:08:44.651195 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.653310 kubelet[2817]: E0114 01:08:44.653060 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.653310 kubelet[2817]: W0114 01:08:44.653078 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.653310 kubelet[2817]: E0114 01:08:44.653099 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.654122 kubelet[2817]: E0114 01:08:44.653962 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.657767 kubelet[2817]: W0114 01:08:44.657691 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.665266 kubelet[2817]: E0114 01:08:44.665241 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.666359 kubelet[2817]: E0114 01:08:44.666343 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.666767 kubelet[2817]: W0114 01:08:44.666521 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.666767 kubelet[2817]: E0114 01:08:44.666541 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:44.673955 kubelet[2817]: E0114 01:08:44.673771 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:44.673955 kubelet[2817]: W0114 01:08:44.673827 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:44.673955 kubelet[2817]: E0114 01:08:44.673845 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:44.741357 kubelet[2817]: E0114 01:08:44.733601 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:45.071749 containerd[1618]: time="2026-01-14T01:08:45.071266180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:45.092850 containerd[1618]: time="2026-01-14T01:08:45.087847928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 14 01:08:45.095976 containerd[1618]: time="2026-01-14T01:08:45.095827281Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:45.121993 containerd[1618]: time="2026-01-14T01:08:45.120993143Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:45.126535 containerd[1618]: time="2026-01-14T01:08:45.125508595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.512751319s" Jan 14 01:08:45.126535 containerd[1618]: time="2026-01-14T01:08:45.125541126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:08:45.134038 containerd[1618]: time="2026-01-14T01:08:45.128650954Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:08:45.228998 containerd[1618]: time="2026-01-14T01:08:45.226128525Z" level=info msg="Container 2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:45.279492 containerd[1618]: time="2026-01-14T01:08:45.278492784Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128\"" Jan 14 01:08:45.279673 containerd[1618]: time="2026-01-14T01:08:45.279640485Z" level=info msg="StartContainer for \"2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128\"" Jan 14 01:08:45.283339 containerd[1618]: 
time="2026-01-14T01:08:45.283228599Z" level=info msg="connecting to shim 2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128" address="unix:///run/containerd/s/1d419724286ef7d944f39c0094d6988ff6e5fe7111fc8b452b735fa11a325900" protocol=ttrpc version=3 Jan 14 01:08:45.401513 systemd[1]: Started cri-containerd-2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128.scope - libcontainer container 2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128. Jan 14 01:08:45.485976 kubelet[2817]: E0114 01:08:45.485332 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:45.549600 kubelet[2817]: E0114 01:08:45.549335 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.549600 kubelet[2817]: W0114 01:08:45.549363 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.549600 kubelet[2817]: E0114 01:08:45.549444 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.550858 kubelet[2817]: E0114 01:08:45.550769 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.550858 kubelet[2817]: W0114 01:08:45.550786 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.550858 kubelet[2817]: E0114 01:08:45.550803 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.555673 kubelet[2817]: E0114 01:08:45.555540 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.556525 kubelet[2817]: W0114 01:08:45.555559 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.557223 kubelet[2817]: E0114 01:08:45.556659 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.559673 kubelet[2817]: E0114 01:08:45.558871 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.559673 kubelet[2817]: W0114 01:08:45.558956 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.559673 kubelet[2817]: E0114 01:08:45.558974 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.561844 kubelet[2817]: E0114 01:08:45.561578 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.561844 kubelet[2817]: W0114 01:08:45.561591 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.561844 kubelet[2817]: E0114 01:08:45.561604 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.564647 kubelet[2817]: E0114 01:08:45.563864 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.564647 kubelet[2817]: W0114 01:08:45.563877 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.564647 kubelet[2817]: E0114 01:08:45.563963 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.571282 kubelet[2817]: E0114 01:08:45.570619 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.571282 kubelet[2817]: W0114 01:08:45.570643 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.571282 kubelet[2817]: E0114 01:08:45.570658 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.575977 kubelet[2817]: E0114 01:08:45.574646 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.576367 kubelet[2817]: W0114 01:08:45.576347 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.577670 kubelet[2817]: E0114 01:08:45.577541 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.583616 kubelet[2817]: E0114 01:08:45.583598 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.583760 kubelet[2817]: W0114 01:08:45.583686 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.583760 kubelet[2817]: E0114 01:08:45.583705 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.592607 kubelet[2817]: E0114 01:08:45.591075 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.592607 kubelet[2817]: W0114 01:08:45.591091 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.592607 kubelet[2817]: E0114 01:08:45.591108 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.593977 kubelet[2817]: E0114 01:08:45.593800 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.593977 kubelet[2817]: W0114 01:08:45.593817 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.593977 kubelet[2817]: E0114 01:08:45.593831 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.597572 kubelet[2817]: E0114 01:08:45.597471 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.597572 kubelet[2817]: W0114 01:08:45.597487 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.597572 kubelet[2817]: E0114 01:08:45.597500 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.602533 kubelet[2817]: E0114 01:08:45.599642 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.602533 kubelet[2817]: W0114 01:08:45.599685 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.602533 kubelet[2817]: E0114 01:08:45.599699 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.602845 kubelet[2817]: E0114 01:08:45.602720 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.602845 kubelet[2817]: W0114 01:08:45.602736 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.602845 kubelet[2817]: E0114 01:08:45.602749 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.603543 kubelet[2817]: E0114 01:08:45.603528 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.603676 kubelet[2817]: W0114 01:08:45.603603 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.603676 kubelet[2817]: E0114 01:08:45.603619 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.630531 kubelet[2817]: E0114 01:08:45.630308 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.630531 kubelet[2817]: W0114 01:08:45.630331 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.630531 kubelet[2817]: E0114 01:08:45.630351 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.636591 kubelet[2817]: E0114 01:08:45.636572 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.636813 kubelet[2817]: W0114 01:08:45.636692 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.639277 kubelet[2817]: E0114 01:08:45.636797 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.639529 kubelet[2817]: E0114 01:08:45.639511 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.639799 kubelet[2817]: W0114 01:08:45.639587 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.639799 kubelet[2817]: E0114 01:08:45.639609 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.640131 kubelet[2817]: E0114 01:08:45.640047 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.640131 kubelet[2817]: W0114 01:08:45.640092 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.640131 kubelet[2817]: E0114 01:08:45.640112 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.641523 kubelet[2817]: E0114 01:08:45.640489 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.641523 kubelet[2817]: W0114 01:08:45.640533 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.641523 kubelet[2817]: E0114 01:08:45.640545 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.645002 kubelet[2817]: E0114 01:08:45.643773 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.645816 kubelet[2817]: W0114 01:08:45.645178 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.645816 kubelet[2817]: E0114 01:08:45.645239 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.647865 kubelet[2817]: E0114 01:08:45.646976 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.647865 kubelet[2817]: W0114 01:08:45.647006 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.647865 kubelet[2817]: E0114 01:08:45.647487 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.648000 audit: BPF prog-id=164 op=LOAD Jan 14 01:08:45.648000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3336 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264353432663536386164643834663738643532303661363238346635 Jan 14 01:08:45.648000 audit: BPF prog-id=165 op=LOAD Jan 14 01:08:45.648000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3336 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.648000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264353432663536386164643834663738643532303661363238346635 Jan 14 01:08:45.648000 audit: BPF prog-id=165 op=UNLOAD Jan 14 01:08:45.648000 audit[3463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264353432663536386164643834663738643532303661363238346635 Jan 14 01:08:45.648000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:08:45.648000 audit[3463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264353432663536386164643834663738643532303661363238346635 Jan 14 01:08:45.648000 audit: BPF prog-id=166 op=LOAD Jan 14 01:08:45.648000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3336 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:08:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264353432663536386164643834663738643532303661363238346635 Jan 14 01:08:45.659664 kubelet[2817]: E0114 01:08:45.649128 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.659664 kubelet[2817]: W0114 01:08:45.649143 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.659664 kubelet[2817]: E0114 01:08:45.653588 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.665516 kubelet[2817]: E0114 01:08:45.663162 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.665516 kubelet[2817]: W0114 01:08:45.663180 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.665516 kubelet[2817]: E0114 01:08:45.663199 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.667066 kubelet[2817]: E0114 01:08:45.667049 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.667256 kubelet[2817]: W0114 01:08:45.667176 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.668188 kubelet[2817]: E0114 01:08:45.667642 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.675549 kubelet[2817]: E0114 01:08:45.669281 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.675549 kubelet[2817]: W0114 01:08:45.675014 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.675549 kubelet[2817]: E0114 01:08:45.675134 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.679670 kubelet[2817]: E0114 01:08:45.679653 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.679771 kubelet[2817]: W0114 01:08:45.679752 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.680277 kubelet[2817]: E0114 01:08:45.680107 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.684183 kubelet[2817]: E0114 01:08:45.683241 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.684183 kubelet[2817]: W0114 01:08:45.683258 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.684183 kubelet[2817]: E0114 01:08:45.683322 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.685333 kubelet[2817]: E0114 01:08:45.685223 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.685671 kubelet[2817]: W0114 01:08:45.685486 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.685671 kubelet[2817]: E0114 01:08:45.685514 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.687000 audit[3515]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:45.687000 audit[3515]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd03d39da0 a2=0 a3=7ffd03d39d8c items=0 ppid=2971 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:45.691714 kubelet[2817]: E0114 01:08:45.691049 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.691714 kubelet[2817]: W0114 01:08:45.691064 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.691714 kubelet[2817]: E0114 01:08:45.691093 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.693222 kubelet[2817]: E0114 01:08:45.693180 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.693276 kubelet[2817]: W0114 01:08:45.693224 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.693554 kubelet[2817]: E0114 01:08:45.693346 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.694968 kubelet[2817]: E0114 01:08:45.694758 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.694968 kubelet[2817]: W0114 01:08:45.694775 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.694968 kubelet[2817]: E0114 01:08:45.694800 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:08:45.695430 kubelet[2817]: E0114 01:08:45.695362 2817 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:08:45.695518 kubelet[2817]: W0114 01:08:45.695502 2817 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:08:45.695598 kubelet[2817]: E0114 01:08:45.695580 2817 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:08:45.700000 audit[3515]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:08:45.700000 audit[3515]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd03d39da0 a2=0 a3=7ffd03d39d8c items=0 ppid=2971 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:45.700000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:08:45.780026 containerd[1618]: time="2026-01-14T01:08:45.779961431Z" level=info msg="StartContainer for \"2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128\" returns successfully" Jan 14 01:08:45.826229 systemd[1]: cri-containerd-2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128.scope: Deactivated successfully. 
Jan 14 01:08:45.834000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:08:45.854316 containerd[1618]: time="2026-01-14T01:08:45.853974219Z" level=info msg="received container exit event container_id:\"2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128\" id:\"2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128\" pid:3476 exited_at:{seconds:1768352925 nanos:849114777}" Jan 14 01:08:46.009228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d542f568add84f78d5206a6284f56534dc9439528ffbef472ab5c9404086128-rootfs.mount: Deactivated successfully. Jan 14 01:08:46.512618 kubelet[2817]: E0114 01:08:46.510500 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:46.512618 kubelet[2817]: E0114 01:08:46.511258 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:46.515802 containerd[1618]: time="2026-01-14T01:08:46.515509582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:08:46.733184 kubelet[2817]: E0114 01:08:46.732498 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:47.517564 kubelet[2817]: E0114 01:08:47.517365 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:48.732962 kubelet[2817]: E0114 01:08:48.732817 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:50.732486 kubelet[2817]: E0114 01:08:50.731593 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:52.708095 containerd[1618]: time="2026-01-14T01:08:52.707835420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:52.712619 containerd[1618]: time="2026-01-14T01:08:52.712099328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:08:52.717030 containerd[1618]: time="2026-01-14T01:08:52.716767260Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:52.733748 kubelet[2817]: E0114 01:08:52.733668 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:52.735644 containerd[1618]: time="2026-01-14T01:08:52.734479688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:08:52.735644 containerd[1618]: time="2026-01-14T01:08:52.735149545Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.219589459s" Jan 14 01:08:52.735644 containerd[1618]: time="2026-01-14T01:08:52.735182357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:08:52.742226 containerd[1618]: time="2026-01-14T01:08:52.742145713Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:08:52.767847 containerd[1618]: time="2026-01-14T01:08:52.766109038Z" level=info msg="Container 1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:08:52.784625 containerd[1618]: time="2026-01-14T01:08:52.784496379Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254\"" Jan 14 01:08:52.786775 containerd[1618]: time="2026-01-14T01:08:52.786714992Z" level=info msg="StartContainer for \"1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254\"" Jan 14 01:08:52.793012 containerd[1618]: time="2026-01-14T01:08:52.792361619Z" level=info msg="connecting to shim 1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254" address="unix:///run/containerd/s/1d419724286ef7d944f39c0094d6988ff6e5fe7111fc8b452b735fa11a325900" protocol=ttrpc version=3 Jan 14 01:08:52.868326 systemd[1]: Started 
cri-containerd-1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254.scope - libcontainer container 1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254. Jan 14 01:08:53.006345 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:08:53.006526 kernel: audit: type=1334 audit(1768352932.995:565): prog-id=167 op=LOAD Jan 14 01:08:52.995000 audit: BPF prog-id=167 op=LOAD Jan 14 01:08:52.995000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:53.031871 kernel: audit: type=1300 audit(1768352932.995:565): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:52.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:52.995000 audit: BPF prog-id=168 op=LOAD Jan 14 01:08:53.049167 kernel: audit: type=1327 audit(1768352932.995:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:53.049244 kernel: audit: type=1334 audit(1768352932.995:566): prog-id=168 op=LOAD Jan 14 01:08:53.049630 kernel: audit: type=1300 audit(1768352932.995:566): arch=c000003e syscall=321 success=yes exit=22 
a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:52.995000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:53.064502 kernel: audit: type=1327 audit(1768352932.995:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:52.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:53.090395 kernel: audit: type=1334 audit(1768352932.995:567): prog-id=168 op=UNLOAD Jan 14 01:08:52.995000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:08:52.995000 audit[3559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:53.116049 kernel: audit: type=1300 audit(1768352932.995:567): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:08:53.116127 kernel: audit: type=1327 audit(1768352932.995:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:52.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:53.129219 containerd[1618]: time="2026-01-14T01:08:53.129126723Z" level=info msg="StartContainer for \"1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254\" returns successfully" Jan 14 01:08:53.141194 kernel: audit: type=1334 audit(1768352932.995:568): prog-id=167 op=UNLOAD Jan 14 01:08:52.995000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:08:52.995000 audit[3559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:52.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:52.995000 audit: BPF prog-id=169 op=LOAD Jan 14 01:08:52.995000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3336 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:08:52.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162386332643430363336663033633735303239306439633065636230 Jan 14 01:08:53.558612 kubelet[2817]: E0114 01:08:53.558405 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:54.562152 kubelet[2817]: E0114 01:08:54.562039 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:54.587794 systemd[1]: cri-containerd-1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254.scope: Deactivated successfully. Jan 14 01:08:54.588343 systemd[1]: cri-containerd-1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254.scope: Consumed 1.049s CPU time, 176.2M memory peak, 3.7M read from disk, 171.3M written to disk. Jan 14 01:08:54.600000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:08:54.602087 containerd[1618]: time="2026-01-14T01:08:54.601823492Z" level=info msg="received container exit event container_id:\"1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254\" id:\"1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254\" pid:3571 exited_at:{seconds:1768352934 nanos:595187099}" Jan 14 01:08:54.707324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b8c2d40636f03c750290d9c0ecb049946201805e68b5bf15f79d195e52fd254-rootfs.mount: Deactivated successfully. 
Jan 14 01:08:54.748630 kubelet[2817]: E0114 01:08:54.745317 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:08:54.795518 kubelet[2817]: I0114 01:08:54.795159 2817 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:08:54.915106 systemd[1]: Created slice kubepods-burstable-pod8cce9086_b715_4037_a543_6640cc00d61c.slice - libcontainer container kubepods-burstable-pod8cce9086_b715_4037_a543_6640cc00d61c.slice. Jan 14 01:08:54.947098 systemd[1]: Created slice kubepods-besteffort-podf73e61d7_350a_471c_9476_00cd84fadf64.slice - libcontainer container kubepods-besteffort-podf73e61d7_350a_471c_9476_00cd84fadf64.slice. Jan 14 01:08:54.962176 systemd[1]: Created slice kubepods-burstable-pod578dbb0c_1de2_46fa_a95a_68290169880c.slice - libcontainer container kubepods-burstable-pod578dbb0c_1de2_46fa_a95a_68290169880c.slice. Jan 14 01:08:54.992802 systemd[1]: Created slice kubepods-besteffort-podb1255f7d_606b_4b44_9160_25a609a72f97.slice - libcontainer container kubepods-besteffort-podb1255f7d_606b_4b44_9160_25a609a72f97.slice. 
Jan 14 01:08:55.001442 kubelet[2817]: I0114 01:08:55.000712 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/31330de6-3f41-4f44-bd94-776d84913764-calico-apiserver-certs\") pod \"calico-apiserver-b64b7c788-2crrf\" (UID: \"31330de6-3f41-4f44-bd94-776d84913764\") " pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" Jan 14 01:08:55.001442 kubelet[2817]: I0114 01:08:55.000773 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/578dbb0c-1de2-46fa-a95a-68290169880c-config-volume\") pod \"coredns-668d6bf9bc-h7hzf\" (UID: \"578dbb0c-1de2-46fa-a95a-68290169880c\") " pod="kube-system/coredns-668d6bf9bc-h7hzf" Jan 14 01:08:55.001442 kubelet[2817]: I0114 01:08:55.000802 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6cd6e89-0e6a-47aa-ae56-5f40f27190c0-goldmane-ca-bundle\") pod \"goldmane-666569f655-8zxlx\" (UID: \"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0\") " pod="calico-system/goldmane-666569f655-8zxlx" Jan 14 01:08:55.001442 kubelet[2817]: I0114 01:08:55.000825 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f6cd6e89-0e6a-47aa-ae56-5f40f27190c0-goldmane-key-pair\") pod \"goldmane-666569f655-8zxlx\" (UID: \"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0\") " pod="calico-system/goldmane-666569f655-8zxlx" Jan 14 01:08:55.001442 kubelet[2817]: I0114 01:08:55.000849 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73e61d7-350a-471c-9476-00cd84fadf64-tigera-ca-bundle\") pod \"calico-kube-controllers-77fd4f6b7c-sb5qb\" (UID: 
\"f73e61d7-350a-471c-9476-00cd84fadf64\") " pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" Jan 14 01:08:55.001801 kubelet[2817]: I0114 01:08:55.000872 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ch2n\" (UniqueName: \"kubernetes.io/projected/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-kube-api-access-9ch2n\") pod \"whisker-785d8fcbf4-jxss5\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " pod="calico-system/whisker-785d8fcbf4-jxss5" Jan 14 01:08:55.001801 kubelet[2817]: I0114 01:08:55.001073 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfgp\" (UniqueName: \"kubernetes.io/projected/578dbb0c-1de2-46fa-a95a-68290169880c-kube-api-access-qcfgp\") pod \"coredns-668d6bf9bc-h7hzf\" (UID: \"578dbb0c-1de2-46fa-a95a-68290169880c\") " pod="kube-system/coredns-668d6bf9bc-h7hzf" Jan 14 01:08:55.001801 kubelet[2817]: I0114 01:08:55.001096 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2ql\" (UniqueName: \"kubernetes.io/projected/31330de6-3f41-4f44-bd94-776d84913764-kube-api-access-vj2ql\") pod \"calico-apiserver-b64b7c788-2crrf\" (UID: \"31330de6-3f41-4f44-bd94-776d84913764\") " pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" Jan 14 01:08:55.001801 kubelet[2817]: I0114 01:08:55.001120 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrz94\" (UniqueName: \"kubernetes.io/projected/8cce9086-b715-4037-a543-6640cc00d61c-kube-api-access-wrz94\") pod \"coredns-668d6bf9bc-d4h48\" (UID: \"8cce9086-b715-4037-a543-6640cc00d61c\") " pod="kube-system/coredns-668d6bf9bc-d4h48" Jan 14 01:08:55.001801 kubelet[2817]: I0114 01:08:55.001145 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/b1255f7d-606b-4b44-9160-25a609a72f97-calico-apiserver-certs\") pod \"calico-apiserver-b64b7c788-f89f6\" (UID: \"b1255f7d-606b-4b44-9160-25a609a72f97\") " pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" Jan 14 01:08:55.002110 kubelet[2817]: I0114 01:08:55.001177 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8x65\" (UniqueName: \"kubernetes.io/projected/b1255f7d-606b-4b44-9160-25a609a72f97-kube-api-access-l8x65\") pod \"calico-apiserver-b64b7c788-f89f6\" (UID: \"b1255f7d-606b-4b44-9160-25a609a72f97\") " pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" Jan 14 01:08:55.002110 kubelet[2817]: I0114 01:08:55.001214 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lnb\" (UniqueName: \"kubernetes.io/projected/f73e61d7-350a-471c-9476-00cd84fadf64-kube-api-access-s7lnb\") pod \"calico-kube-controllers-77fd4f6b7c-sb5qb\" (UID: \"f73e61d7-350a-471c-9476-00cd84fadf64\") " pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" Jan 14 01:08:55.002110 kubelet[2817]: I0114 01:08:55.001249 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zmvf\" (UniqueName: \"kubernetes.io/projected/f6cd6e89-0e6a-47aa-ae56-5f40f27190c0-kube-api-access-2zmvf\") pod \"goldmane-666569f655-8zxlx\" (UID: \"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0\") " pod="calico-system/goldmane-666569f655-8zxlx" Jan 14 01:08:55.002110 kubelet[2817]: I0114 01:08:55.001272 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-backend-key-pair\") pod \"whisker-785d8fcbf4-jxss5\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " pod="calico-system/whisker-785d8fcbf4-jxss5" Jan 14 01:08:55.002110 kubelet[2817]: I0114 
01:08:55.001309 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-ca-bundle\") pod \"whisker-785d8fcbf4-jxss5\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " pod="calico-system/whisker-785d8fcbf4-jxss5" Jan 14 01:08:55.002282 kubelet[2817]: I0114 01:08:55.001331 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8cce9086-b715-4037-a543-6640cc00d61c-config-volume\") pod \"coredns-668d6bf9bc-d4h48\" (UID: \"8cce9086-b715-4037-a543-6640cc00d61c\") " pod="kube-system/coredns-668d6bf9bc-d4h48" Jan 14 01:08:55.002282 kubelet[2817]: I0114 01:08:55.001354 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cd6e89-0e6a-47aa-ae56-5f40f27190c0-config\") pod \"goldmane-666569f655-8zxlx\" (UID: \"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0\") " pod="calico-system/goldmane-666569f655-8zxlx" Jan 14 01:08:55.036629 systemd[1]: Created slice kubepods-besteffort-podf6cd6e89_0e6a_47aa_ae56_5f40f27190c0.slice - libcontainer container kubepods-besteffort-podf6cd6e89_0e6a_47aa_ae56_5f40f27190c0.slice. Jan 14 01:08:55.058580 systemd[1]: Created slice kubepods-besteffort-pod42853ca5_c0c5_4fa1_ac72_7982d8343ca8.slice - libcontainer container kubepods-besteffort-pod42853ca5_c0c5_4fa1_ac72_7982d8343ca8.slice. Jan 14 01:08:55.072680 systemd[1]: Created slice kubepods-besteffort-pod31330de6_3f41_4f44_bd94_776d84913764.slice - libcontainer container kubepods-besteffort-pod31330de6_3f41_4f44_bd94_776d84913764.slice. 
Jan 14 01:08:55.242690 kubelet[2817]: E0114 01:08:55.240828 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:55.252599 containerd[1618]: time="2026-01-14T01:08:55.252549895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4h48,Uid:8cce9086-b715-4037-a543-6640cc00d61c,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:55.260684 containerd[1618]: time="2026-01-14T01:08:55.258851895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,}" Jan 14 01:08:55.279166 kubelet[2817]: E0114 01:08:55.279107 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:55.286619 containerd[1618]: time="2026-01-14T01:08:55.284665289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7hzf,Uid:578dbb0c-1de2-46fa-a95a-68290169880c,Namespace:kube-system,Attempt:0,}" Jan 14 01:08:55.304262 containerd[1618]: time="2026-01-14T01:08:55.304192313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-f89f6,Uid:b1255f7d-606b-4b44-9160-25a609a72f97,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:08:55.364808 containerd[1618]: time="2026-01-14T01:08:55.364571431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,}" Jan 14 01:08:55.375390 containerd[1618]: time="2026-01-14T01:08:55.375307946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785d8fcbf4-jxss5,Uid:42853ca5-c0c5-4fa1-ac72-7982d8343ca8,Namespace:calico-system,Attempt:0,}" Jan 14 01:08:55.389301 containerd[1618]: 
time="2026-01-14T01:08:55.387738233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:08:55.580213 kubelet[2817]: E0114 01:08:55.579747 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:08:55.585749 containerd[1618]: time="2026-01-14T01:08:55.585714168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:08:55.935055 containerd[1618]: time="2026-01-14T01:08:55.934997227Z" level=error msg="Failed to destroy network for sandbox \"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:08:55.944264 systemd[1]: run-netns-cni\x2d767b0225\x2d2e46\x2dd4e6\x2d2843\x2d4ffa7cd05e45.mount: Deactivated successfully. 
Jan 14 01:08:55.987077 containerd[1618]: time="2026-01-14T01:08:55.987016672Z" level=error msg="Failed to destroy network for sandbox \"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:55.988036 containerd[1618]: time="2026-01-14T01:08:55.987998192Z" level=error msg="Failed to destroy network for sandbox \"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:55.992869 systemd[1]: run-netns-cni\x2d31ed5bbb\x2d9e69\x2dd552\x2db970\x2daae99701aa4e.mount: Deactivated successfully.
Jan 14 01:08:55.993134 systemd[1]: run-netns-cni\x2d3d2bcaaf\x2d6b33\x2de9bd\x2d67b3\x2d03e40dd9784d.mount: Deactivated successfully.
Jan 14 01:08:55.994401 containerd[1618]: time="2026-01-14T01:08:55.994217682Z" level=error msg="Failed to destroy network for sandbox \"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:55.996446 containerd[1618]: time="2026-01-14T01:08:55.994235714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:55.997120 systemd[1]: run-netns-cni\x2d3f6b57eb\x2d21e0\x2d5796\x2dee79\x2dcb17bc0884cd.mount: Deactivated successfully.
Jan 14 01:08:55.998556 kubelet[2817]: E0114 01:08:55.997272 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:55.998556 kubelet[2817]: E0114 01:08:55.997357 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf"
Jan 14 01:08:55.998556 kubelet[2817]: E0114 01:08:55.997388 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf"
Jan 14 01:08:55.998697 kubelet[2817]: E0114 01:08:55.997443 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a974169718002b832da2f9becbbfb09a126596c81a9198713475fd78f3fb9c6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764"
Jan 14 01:08:56.001846 containerd[1618]: time="2026-01-14T01:08:56.001801969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.003044 kubelet[2817]: E0114 01:08:56.002981 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.003044 kubelet[2817]: E0114 01:08:56.003023 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8zxlx"
Jan 14 01:08:56.003258 kubelet[2817]: E0114 01:08:56.003187 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8zxlx"
Jan 14 01:08:56.003318 kubelet[2817]: E0114 01:08:56.003269 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b191d455360d712ce5cb0b8a67cf9609ee8b21bc2acc22f70b5f638e14802cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0"
Jan 14 01:08:56.007271 containerd[1618]: time="2026-01-14T01:08:56.007197875Z" level=error msg="Failed to destroy network for sandbox \"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.011366 systemd[1]: run-netns-cni\x2d9fe8c508\x2d62d5\x2d4151\x2d7e14\x2d937f6ea250a8.mount: Deactivated successfully.
Jan 14 01:08:56.023692 containerd[1618]: time="2026-01-14T01:08:56.023140785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7hzf,Uid:578dbb0c-1de2-46fa-a95a-68290169880c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.024011 kubelet[2817]: E0114 01:08:56.023531 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.024011 kubelet[2817]: E0114 01:08:56.023984 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h7hzf"
Jan 14 01:08:56.024145 kubelet[2817]: E0114 01:08:56.024103 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h7hzf"
Jan 14 01:08:56.024718 kubelet[2817]: E0114 01:08:56.024369 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h7hzf_kube-system(578dbb0c-1de2-46fa-a95a-68290169880c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h7hzf_kube-system(578dbb0c-1de2-46fa-a95a-68290169880c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"407860a56c464478fa736ef2e8c62dd23eb1506d30132a5f9b228e301ef69d40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h7hzf" podUID="578dbb0c-1de2-46fa-a95a-68290169880c"
Jan 14 01:08:56.028880 containerd[1618]: time="2026-01-14T01:08:56.028669449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.029160 kubelet[2817]: E0114 01:08:56.029069 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.029209 kubelet[2817]: E0114 01:08:56.029179 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb"
Jan 14 01:08:56.029233 kubelet[2817]: E0114 01:08:56.029212 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb"
Jan 14 01:08:56.029354 kubelet[2817]: E0114 01:08:56.029263 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d395ee83d23f3aaf7a7986db8cb2c9f31237069242c035af6f56391fde7a70e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64"
Jan 14 01:08:56.048334 containerd[1618]: time="2026-01-14T01:08:56.048230154Z" level=error msg="Failed to destroy network for sandbox \"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.050035 containerd[1618]: time="2026-01-14T01:08:56.049973547Z" level=error msg="Failed to destroy network for sandbox \"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.061639 containerd[1618]: time="2026-01-14T01:08:56.061309399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785d8fcbf4-jxss5,Uid:42853ca5-c0c5-4fa1-ac72-7982d8343ca8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.062005 kubelet[2817]: E0114 01:08:56.061865 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.062141 kubelet[2817]: E0114 01:08:56.062009 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785d8fcbf4-jxss5"
Jan 14 01:08:56.062201 kubelet[2817]: E0114 01:08:56.062036 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785d8fcbf4-jxss5"
Jan 14 01:08:56.062269 kubelet[2817]: E0114 01:08:56.062236 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-785d8fcbf4-jxss5_calico-system(42853ca5-c0c5-4fa1-ac72-7982d8343ca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-785d8fcbf4-jxss5_calico-system(42853ca5-c0c5-4fa1-ac72-7982d8343ca8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca844dc7e18952facc93bea5c90633fe3774ad906046e2bb6743eb81e0facba9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-785d8fcbf4-jxss5" podUID="42853ca5-c0c5-4fa1-ac72-7982d8343ca8"
Jan 14 01:08:56.070868 containerd[1618]: time="2026-01-14T01:08:56.070591015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4h48,Uid:8cce9086-b715-4037-a543-6640cc00d61c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.071099 kubelet[2817]: E0114 01:08:56.070804 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.071099 kubelet[2817]: E0114 01:08:56.070843 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4h48"
Jan 14 01:08:56.071099 kubelet[2817]: E0114 01:08:56.070864 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4h48"
Jan 14 01:08:56.071223 kubelet[2817]: E0114 01:08:56.070986 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d4h48_kube-system(8cce9086-b715-4037-a543-6640cc00d61c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d4h48_kube-system(8cce9086-b715-4037-a543-6640cc00d61c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"060eba854c28996190009ca852687211bf2987fa2df0e2e159747f5dd638a18f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d4h48" podUID="8cce9086-b715-4037-a543-6640cc00d61c"
Jan 14 01:08:56.077289 containerd[1618]: time="2026-01-14T01:08:56.077135938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-f89f6,Uid:b1255f7d-606b-4b44-9160-25a609a72f97,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.077727 kubelet[2817]: E0114 01:08:56.077594 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.077787 kubelet[2817]: E0114 01:08:56.077740 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6"
Jan 14 01:08:56.077787 kubelet[2817]: E0114 01:08:56.077767 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6"
Jan 14 01:08:56.078003 kubelet[2817]: E0114 01:08:56.077816 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4214ff5f868b57498b88c2c4359a335a85f7bec027fb8014c4ab22849ad75d38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97"
Jan 14 01:08:56.710690 systemd[1]: run-netns-cni\x2dd8feb6c1\x2d4bd2\x2dbbf0\x2dc52c\x2d7b922d5afefb.mount: Deactivated successfully.
Jan 14 01:08:56.711359 systemd[1]: run-netns-cni\x2deff76508\x2d26da\x2dcb41\x2d600a\x2d2ee96c939360.mount: Deactivated successfully.
Jan 14 01:08:56.764268 systemd[1]: Created slice kubepods-besteffort-podba6f7f37_698f_4697_a408_a3efabbcf48e.slice - libcontainer container kubepods-besteffort-podba6f7f37_698f_4697_a408_a3efabbcf48e.slice.
Jan 14 01:08:56.779179 containerd[1618]: time="2026-01-14T01:08:56.778260042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8jc6,Uid:ba6f7f37-698f-4697-a408-a3efabbcf48e,Namespace:calico-system,Attempt:0,}"
Jan 14 01:08:56.979456 containerd[1618]: time="2026-01-14T01:08:56.977874271Z" level=error msg="Failed to destroy network for sandbox \"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.981480 systemd[1]: run-netns-cni\x2d93fd28bf\x2d0007\x2ddc62\x2dcd72\x2d295172993657.mount: Deactivated successfully.
Jan 14 01:08:56.997747 containerd[1618]: time="2026-01-14T01:08:56.997481058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8jc6,Uid:ba6f7f37-698f-4697-a408-a3efabbcf48e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.998191 kubelet[2817]: E0114 01:08:56.997995 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:08:56.998191 kubelet[2817]: E0114 01:08:56.998114 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8jc6"
Jan 14 01:08:56.998191 kubelet[2817]: E0114 01:08:56.998140 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q8jc6"
Jan 14 01:08:56.998814 kubelet[2817]: E0114 01:08:56.998196 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14babb77116110b34087d1ae3021acb96a23aa9c1e917f2ccd7c456b912b153c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e"
Jan 14 01:09:06.741519 containerd[1618]: time="2026-01-14T01:09:06.741472075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 01:09:06.743819 containerd[1618]: time="2026-01-14T01:09:06.742652268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785d8fcbf4-jxss5,Uid:42853ca5-c0c5-4fa1-ac72-7982d8343ca8,Namespace:calico-system,Attempt:0,}"
Jan 14 01:09:07.000994 containerd[1618]: time="2026-01-14T01:09:06.998991811Z" level=error msg="Failed to destroy network for sandbox \"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.006122 systemd[1]: run-netns-cni\x2df06262a9\x2d4797\x2d732b\x2d0dcb\x2dc409e1c79c65.mount: Deactivated successfully.
Jan 14 01:09:07.019471 containerd[1618]: time="2026-01-14T01:09:07.017663755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785d8fcbf4-jxss5,Uid:42853ca5-c0c5-4fa1-ac72-7982d8343ca8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.021656 kubelet[2817]: E0114 01:09:07.020868 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.021656 kubelet[2817]: E0114 01:09:07.021341 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785d8fcbf4-jxss5"
Jan 14 01:09:07.021656 kubelet[2817]: E0114 01:09:07.021370 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785d8fcbf4-jxss5"
Jan 14 01:09:07.022423 kubelet[2817]: E0114 01:09:07.021416 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-785d8fcbf4-jxss5_calico-system(42853ca5-c0c5-4fa1-ac72-7982d8343ca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-785d8fcbf4-jxss5_calico-system(42853ca5-c0c5-4fa1-ac72-7982d8343ca8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8717561f06c7b086f5503f502bf6a0643a2a24ac32eaefbc573e2c5d7f53385\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-785d8fcbf4-jxss5" podUID="42853ca5-c0c5-4fa1-ac72-7982d8343ca8"
Jan 14 01:09:07.035520 containerd[1618]: time="2026-01-14T01:09:07.035412499Z" level=error msg="Failed to destroy network for sandbox \"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.040725 systemd[1]: run-netns-cni\x2dacd3b7b2\x2deef1\x2d9dbd\x2d6869\x2d2c6ad0ebf8d2.mount: Deactivated successfully.
Jan 14 01:09:07.049877 containerd[1618]: time="2026-01-14T01:09:07.049491896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.051842 kubelet[2817]: E0114 01:09:07.050552 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.051842 kubelet[2817]: E0114 01:09:07.050633 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf"
Jan 14 01:09:07.052486 kubelet[2817]: E0114 01:09:07.050662 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf"
Jan 14 01:09:07.052486 kubelet[2817]: E0114 01:09:07.052440 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"244e6b5755401d002a7fcbaa72336d238fef18a3d2ac1e309339aeedd1fdbfea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764"
Jan 14 01:09:07.733657 containerd[1618]: time="2026-01-14T01:09:07.732798466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,}"
Jan 14 01:09:07.733657 containerd[1618]: time="2026-01-14T01:09:07.733197043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,}"
Jan 14 01:09:07.947430 containerd[1618]: time="2026-01-14T01:09:07.947369929Z" level=error msg="Failed to destroy network for sandbox \"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.956333 containerd[1618]: time="2026-01-14T01:09:07.955671140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.956605 systemd[1]: run-netns-cni\x2d347571b0\x2d8712\x2d086e\x2da5ea\x2d1664ea013dc2.mount: Deactivated successfully.
Jan 14 01:09:07.957328 kubelet[2817]: E0114 01:09:07.957172 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:09:07.957328 kubelet[2817]: E0114 01:09:07.957264 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8zxlx"
Jan 14 01:09:07.957328 kubelet[2817]: E0114 01:09:07.957291 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/goldmane-666569f655-8zxlx" Jan 14 01:09:07.958603 kubelet[2817]: E0114 01:09:07.958251 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aec0af32bd3ba77616fc63ba0954eb4663769a7a621c13e513e07090303bd33b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:07.988145 containerd[1618]: time="2026-01-14T01:09:07.986322903Z" level=error msg="Failed to destroy network for sandbox \"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:07.995662 systemd[1]: run-netns-cni\x2d015834be\x2d439a\x2ddecd\x2d2fc5\x2d000da2a58c69.mount: Deactivated successfully. 
Jan 14 01:09:07.999676 containerd[1618]: time="2026-01-14T01:09:07.999546549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:08.000828 kubelet[2817]: E0114 01:09:08.000502 2817 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:09:08.000828 kubelet[2817]: E0114 01:09:08.000585 2817 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" Jan 14 01:09:08.000828 kubelet[2817]: E0114 01:09:08.000615 2817 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" Jan 14 01:09:08.001377 kubelet[2817]: E0114 01:09:08.000680 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3084e18a98af2513eed602b04f700688805cc8153c130065b21f5f01920822c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:09.270165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657581477.mount: Deactivated successfully. 
Jan 14 01:09:09.397224 containerd[1618]: time="2026-01-14T01:09:09.397061937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:09.400436 containerd[1618]: time="2026-01-14T01:09:09.399706780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:09:09.402067 containerd[1618]: time="2026-01-14T01:09:09.402035414Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:09.407775 containerd[1618]: time="2026-01-14T01:09:09.407555513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:09:09.409542 containerd[1618]: time="2026-01-14T01:09:09.409420238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.823555167s" Jan 14 01:09:09.409542 containerd[1618]: time="2026-01-14T01:09:09.409502143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:09:09.434972 containerd[1618]: time="2026-01-14T01:09:09.434838385Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:09:09.499713 containerd[1618]: time="2026-01-14T01:09:09.496455663Z" level=info msg="Container 
0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:09.528121 containerd[1618]: time="2026-01-14T01:09:09.527559791Z" level=info msg="CreateContainer within sandbox \"549cfb71c9635d6b0a390ec41baac6798d91f25ceec43afe80bf428fe6c9935e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87\"" Jan 14 01:09:09.531020 containerd[1618]: time="2026-01-14T01:09:09.528574162Z" level=info msg="StartContainer for \"0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87\"" Jan 14 01:09:09.532217 containerd[1618]: time="2026-01-14T01:09:09.531576352Z" level=info msg="connecting to shim 0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87" address="unix:///run/containerd/s/1d419724286ef7d944f39c0094d6988ff6e5fe7111fc8b452b735fa11a325900" protocol=ttrpc version=3 Jan 14 01:09:09.605362 systemd[1]: Started cri-containerd-0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87.scope - libcontainer container 0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87. 
Jan 14 01:09:09.727984 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:09:09.728111 kernel: audit: type=1334 audit(1768352949.723:571): prog-id=170 op=LOAD Jan 14 01:09:09.723000 audit: BPF prog-id=170 op=LOAD Jan 14 01:09:09.723000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.743025 kernel: audit: type=1300 audit(1768352949.723:571): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.743096 kernel: audit: type=1327 audit(1768352949.723:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.723000 audit: BPF prog-id=171 op=LOAD Jan 14 01:09:09.759281 kernel: audit: type=1334 audit(1768352949.723:572): prog-id=171 op=LOAD Jan 14 01:09:09.759368 kernel: audit: type=1300 audit(1768352949.723:572): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.723000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.774049 kernel: audit: type=1327 audit(1768352949.723:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.785246 kernel: audit: type=1334 audit(1768352949.723:573): prog-id=171 op=UNLOAD Jan 14 01:09:09.723000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:09:09.788772 kernel: audit: type=1300 audit(1768352949.723:573): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.723000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.801148 kernel: audit: type=1327 audit(1768352949.723:573): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.817817 kernel: audit: type=1334 audit(1768352949.723:574): prog-id=170 op=UNLOAD Jan 14 01:09:09.723000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:09:09.723000 audit[4008]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.723000 audit: BPF prog-id=172 op=LOAD Jan 14 01:09:09.723000 audit[4008]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3336 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:09.723000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035383165383636663065666562653464663739633735643337636535 Jan 14 01:09:09.828086 containerd[1618]: time="2026-01-14T01:09:09.827053616Z" level=info msg="StartContainer for \"0581e866f0efebe4df79c75d37ce55665c13e6ed5e94d6e8bf238bf8783f6e87\" returns successfully" Jan 14 01:09:09.989944 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:09:09.990183 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:09:10.352330 kubelet[2817]: I0114 01:09:10.352239 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-ca-bundle\") pod \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " Jan 14 01:09:10.353012 kubelet[2817]: I0114 01:09:10.352360 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-backend-key-pair\") pod \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " Jan 14 01:09:10.353012 kubelet[2817]: I0114 01:09:10.352401 2817 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ch2n\" (UniqueName: \"kubernetes.io/projected/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-kube-api-access-9ch2n\") pod \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\" (UID: \"42853ca5-c0c5-4fa1-ac72-7982d8343ca8\") " Jan 14 01:09:10.353791 kubelet[2817]: I0114 01:09:10.353707 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "42853ca5-c0c5-4fa1-ac72-7982d8343ca8" (UID: "42853ca5-c0c5-4fa1-ac72-7982d8343ca8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:09:10.363332 kubelet[2817]: I0114 01:09:10.363250 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "42853ca5-c0c5-4fa1-ac72-7982d8343ca8" (UID: "42853ca5-c0c5-4fa1-ac72-7982d8343ca8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:09:10.364485 systemd[1]: var-lib-kubelet-pods-42853ca5\x2dc0c5\x2d4fa1\x2dac72\x2d7982d8343ca8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9ch2n.mount: Deactivated successfully. Jan 14 01:09:10.364682 systemd[1]: var-lib-kubelet-pods-42853ca5\x2dc0c5\x2d4fa1\x2dac72\x2d7982d8343ca8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:09:10.365226 kubelet[2817]: I0114 01:09:10.364678 2817 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-kube-api-access-9ch2n" (OuterVolumeSpecName: "kube-api-access-9ch2n") pod "42853ca5-c0c5-4fa1-ac72-7982d8343ca8" (UID: "42853ca5-c0c5-4fa1-ac72-7982d8343ca8"). InnerVolumeSpecName "kube-api-access-9ch2n". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:09:10.454007 kubelet[2817]: I0114 01:09:10.453715 2817 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ch2n\" (UniqueName: \"kubernetes.io/projected/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-kube-api-access-9ch2n\") on node \"localhost\" DevicePath \"\"" Jan 14 01:09:10.454007 kubelet[2817]: I0114 01:09:10.453836 2817 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 14 01:09:10.454007 kubelet[2817]: I0114 01:09:10.453851 2817 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42853ca5-c0c5-4fa1-ac72-7982d8343ca8-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 14 01:09:10.726723 kubelet[2817]: E0114 01:09:10.726683 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:10.735952 kubelet[2817]: E0114 01:09:10.733657 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:10.736123 containerd[1618]: time="2026-01-14T01:09:10.735308688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4h48,Uid:8cce9086-b715-4037-a543-6640cc00d61c,Namespace:kube-system,Attempt:0,}" Jan 14 01:09:10.740513 containerd[1618]: time="2026-01-14T01:09:10.737831427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-f89f6,Uid:b1255f7d-606b-4b44-9160-25a609a72f97,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:09:10.760125 systemd[1]: Removed slice kubepods-besteffort-pod42853ca5_c0c5_4fa1_ac72_7982d8343ca8.slice - 
libcontainer container kubepods-besteffort-pod42853ca5_c0c5_4fa1_ac72_7982d8343ca8.slice. Jan 14 01:09:10.793010 kubelet[2817]: I0114 01:09:10.792824 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ccsn7" podStartSLOduration=2.7072652230000003 podStartE2EDuration="31.792804131s" podCreationTimestamp="2026-01-14 01:08:39 +0000 UTC" firstStartedPulling="2026-01-14 01:08:40.325319612 +0000 UTC m=+32.752719466" lastFinishedPulling="2026-01-14 01:09:09.41085852 +0000 UTC m=+61.838258374" observedRunningTime="2026-01-14 01:09:10.791858646 +0000 UTC m=+63.219258500" watchObservedRunningTime="2026-01-14 01:09:10.792804131 +0000 UTC m=+63.220204005" Jan 14 01:09:11.065619 systemd[1]: Created slice kubepods-besteffort-pod0a01ad60_6870_4247_94ac_0665fa604563.slice - libcontainer container kubepods-besteffort-pod0a01ad60_6870_4247_94ac_0665fa604563.slice. Jan 14 01:09:11.162998 kubelet[2817]: I0114 01:09:11.162332 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a01ad60-6870-4247-94ac-0665fa604563-whisker-ca-bundle\") pod \"whisker-548b5c59b-jhbn9\" (UID: \"0a01ad60-6870-4247-94ac-0665fa604563\") " pod="calico-system/whisker-548b5c59b-jhbn9" Jan 14 01:09:11.162998 kubelet[2817]: I0114 01:09:11.162395 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxjn6\" (UniqueName: \"kubernetes.io/projected/0a01ad60-6870-4247-94ac-0665fa604563-kube-api-access-wxjn6\") pod \"whisker-548b5c59b-jhbn9\" (UID: \"0a01ad60-6870-4247-94ac-0665fa604563\") " pod="calico-system/whisker-548b5c59b-jhbn9" Jan 14 01:09:11.162998 kubelet[2817]: I0114 01:09:11.162427 2817 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/0a01ad60-6870-4247-94ac-0665fa604563-whisker-backend-key-pair\") pod \"whisker-548b5c59b-jhbn9\" (UID: \"0a01ad60-6870-4247-94ac-0665fa604563\") " pod="calico-system/whisker-548b5c59b-jhbn9" Jan 14 01:09:11.380346 containerd[1618]: time="2026-01-14T01:09:11.379635162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548b5c59b-jhbn9,Uid:0a01ad60-6870-4247-94ac-0665fa604563,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:11.514100 systemd-networkd[1504]: cali7b9cc24fff0: Link UP Jan 14 01:09:11.520151 systemd-networkd[1504]: cali7b9cc24fff0: Gained carrier Jan 14 01:09:11.580175 containerd[1618]: 2026-01-14 01:09:10.882 [INFO][4086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:09:11.580175 containerd[1618]: 2026-01-14 01:09:10.982 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0 calico-apiserver-b64b7c788- calico-apiserver b1255f7d-606b-4b44-9160-25a609a72f97 911 0 2026-01-14 01:08:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b64b7c788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b64b7c788-f89f6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7b9cc24fff0 [] [] }} ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-" Jan 14 01:09:11.580175 containerd[1618]: 2026-01-14 01:09:10.983 [INFO][4086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" 
Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.580175 containerd[1618]: 2026-01-14 01:09:11.292 [INFO][4119] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" HandleID="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Workload="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.293 [INFO][4119] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" HandleID="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Workload="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d0370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b64b7c788-f89f6", "timestamp":"2026-01-14 01:09:11.292692647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.293 [INFO][4119] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.294 [INFO][4119] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.294 [INFO][4119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.337 [INFO][4119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" host="localhost" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.365 [INFO][4119] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.381 [INFO][4119] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.387 [INFO][4119] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.393 [INFO][4119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:11.581120 containerd[1618]: 2026-01-14 01:09:11.393 [INFO][4119] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" host="localhost" Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.401 [INFO][4119] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7 Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.432 [INFO][4119] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" host="localhost" Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4119] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" host="localhost" Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" host="localhost" Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4119] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:11.581524 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4119] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" HandleID="k8s-pod-network.88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Workload="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.581729 containerd[1618]: 2026-01-14 01:09:11.460 [INFO][4086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0", GenerateName:"calico-apiserver-b64b7c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1255f7d-606b-4b44-9160-25a609a72f97", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b64b7c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b64b7c788-f89f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9cc24fff0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:11.582971 containerd[1618]: 2026-01-14 01:09:11.460 [INFO][4086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.582971 containerd[1618]: 2026-01-14 01:09:11.468 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b9cc24fff0 ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.582971 containerd[1618]: 2026-01-14 01:09:11.527 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.583095 containerd[1618]: 2026-01-14 01:09:11.531 [INFO][4086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0", GenerateName:"calico-apiserver-b64b7c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"b1255f7d-606b-4b44-9160-25a609a72f97", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b64b7c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7", Pod:"calico-apiserver-b64b7c788-f89f6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7b9cc24fff0", MAC:"3e:0d:94:b5:fb:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:11.583243 containerd[1618]: 2026-01-14 01:09:11.565 [INFO][4086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-f89f6" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--f89f6-eth0" Jan 14 01:09:11.736309 kubelet[2817]: E0114 01:09:11.733007 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:11.743531 systemd-networkd[1504]: calibd0dde9fe19: Link UP Jan 14 01:09:11.745179 systemd-networkd[1504]: calibd0dde9fe19: Gained carrier Jan 14 01:09:11.746719 containerd[1618]: time="2026-01-14T01:09:11.745414925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8jc6,Uid:ba6f7f37-698f-4697-a408-a3efabbcf48e,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:11.746719 containerd[1618]: time="2026-01-14T01:09:11.745553308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7hzf,Uid:578dbb0c-1de2-46fa-a95a-68290169880c,Namespace:kube-system,Attempt:0,}" Jan 14 01:09:11.750534 kubelet[2817]: I0114 01:09:11.749650 2817 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42853ca5-c0c5-4fa1-ac72-7982d8343ca8" path="/var/lib/kubelet/pods/42853ca5-c0c5-4fa1-ac72-7982d8343ca8/volumes" Jan 14 01:09:11.766642 kubelet[2817]: E0114 01:09:11.766412 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:11.825388 containerd[1618]: 2026-01-14 01:09:10.862 [INFO][4074] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:09:11.825388 containerd[1618]: 2026-01-14 01:09:10.984 [INFO][4074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--d4h48-eth0 coredns-668d6bf9bc- kube-system 
8cce9086-b715-4037-a543-6640cc00d61c 901 0 2026-01-14 01:08:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-d4h48 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd0dde9fe19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-" Jan 14 01:09:11.825388 containerd[1618]: 2026-01-14 01:09:10.984 [INFO][4074] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.825388 containerd[1618]: 2026-01-14 01:09:11.292 [INFO][4126] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" HandleID="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Workload="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.293 [INFO][4126] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" HandleID="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Workload="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018e3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-d4h48", "timestamp":"2026-01-14 01:09:11.292007751 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.294 [INFO][4126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.450 [INFO][4126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.479 [INFO][4126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" host="localhost" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.548 [INFO][4126] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.583 [INFO][4126] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.591 [INFO][4126] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.601 [INFO][4126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:11.825659 containerd[1618]: 2026-01-14 01:09:11.604 [INFO][4126] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" host="localhost" Jan 14 01:09:11.826009 containerd[1618]: 2026-01-14 01:09:11.611 [INFO][4126] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf Jan 14 01:09:11.826009 
containerd[1618]: 2026-01-14 01:09:11.630 [INFO][4126] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" host="localhost" Jan 14 01:09:11.826009 containerd[1618]: 2026-01-14 01:09:11.679 [INFO][4126] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" host="localhost" Jan 14 01:09:11.826009 containerd[1618]: 2026-01-14 01:09:11.682 [INFO][4126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" host="localhost" Jan 14 01:09:11.826009 containerd[1618]: 2026-01-14 01:09:11.684 [INFO][4126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:11.826009 containerd[1618]: 2026-01-14 01:09:11.695 [INFO][4126] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" HandleID="k8s-pod-network.4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Workload="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.826117 containerd[1618]: 2026-01-14 01:09:11.738 [INFO][4074] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d4h48-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cce9086-b715-4037-a543-6640cc00d61c", ResourceVersion:"901", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-d4h48", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd0dde9fe19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:11.826242 containerd[1618]: 2026-01-14 01:09:11.739 [INFO][4074] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.826242 containerd[1618]: 2026-01-14 01:09:11.739 [INFO][4074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd0dde9fe19 
ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.826242 containerd[1618]: 2026-01-14 01:09:11.750 [INFO][4074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:11.826310 containerd[1618]: 2026-01-14 01:09:11.751 [INFO][4074] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--d4h48-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8cce9086-b715-4037-a543-6640cc00d61c", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf", Pod:"coredns-668d6bf9bc-d4h48", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd0dde9fe19", MAC:"2e:68:6a:f2:83:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:11.826310 containerd[1618]: 2026-01-14 01:09:11.796 [INFO][4074] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4h48" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--d4h48-eth0" Jan 14 01:09:12.070217 containerd[1618]: time="2026-01-14T01:09:12.069985671Z" level=info msg="connecting to shim 88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7" address="unix:///run/containerd/s/b2b314efd7256cd29a9205daceb5af570d0322d8635df1699123c0c0f19880a2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:12.145058 containerd[1618]: time="2026-01-14T01:09:12.144199483Z" level=info msg="connecting to shim 4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf" address="unix:///run/containerd/s/78da572aaa1d021c5f7b957b8e191080c9cc0d7332ea045f7d9986bd0768ecc1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:12.199169 systemd[1]: Started cri-containerd-88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7.scope - libcontainer container 88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7. 
Jan 14 01:09:12.211399 systemd-networkd[1504]: calie57204e558b: Link UP Jan 14 01:09:12.212568 systemd-networkd[1504]: calie57204e558b: Gained carrier Jan 14 01:09:12.274201 systemd[1]: Started cri-containerd-4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf.scope - libcontainer container 4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf. Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.504 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.573 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--548b5c59b--jhbn9-eth0 whisker-548b5c59b- calico-system 0a01ad60-6870-4247-94ac-0665fa604563 1002 0 2026-01-14 01:09:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:548b5c59b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-548b5c59b-jhbn9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie57204e558b [] [] }} ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.573 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.826 [INFO][4174] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" 
HandleID="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Workload="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.844 [INFO][4174] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" HandleID="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Workload="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004719d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-548b5c59b-jhbn9", "timestamp":"2026-01-14 01:09:11.826549845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.844 [INFO][4174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.844 [INFO][4174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.844 [INFO][4174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.921 [INFO][4174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.958 [INFO][4174] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:11.993 [INFO][4174] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.008 [INFO][4174] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.064 [INFO][4174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.064 [INFO][4174] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.085 [INFO][4174] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.106 [INFO][4174] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.153 [INFO][4174] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.153 [INFO][4174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" host="localhost" Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.154 [INFO][4174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:12.281952 containerd[1618]: 2026-01-14 01:09:12.154 [INFO][4174] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" HandleID="k8s-pod-network.567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Workload="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.189 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--548b5c59b--jhbn9-eth0", GenerateName:"whisker-548b5c59b-", Namespace:"calico-system", SelfLink:"", UID:"0a01ad60-6870-4247-94ac-0665fa604563", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548b5c59b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-548b5c59b-jhbn9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie57204e558b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.191 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.191 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie57204e558b ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.215 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.216 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" 
WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--548b5c59b--jhbn9-eth0", GenerateName:"whisker-548b5c59b-", Namespace:"calico-system", SelfLink:"", UID:"0a01ad60-6870-4247-94ac-0665fa604563", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"548b5c59b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b", Pod:"whisker-548b5c59b-jhbn9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie57204e558b", MAC:"52:17:28:22:ff:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:12.285129 containerd[1618]: 2026-01-14 01:09:12.276 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" Namespace="calico-system" Pod="whisker-548b5c59b-jhbn9" WorkloadEndpoint="localhost-k8s-whisker--548b5c59b--jhbn9-eth0" Jan 14 01:09:12.403000 audit: BPF prog-id=173 op=LOAD Jan 14 01:09:12.404000 audit: BPF prog-id=174 op=LOAD Jan 14 01:09:12.404000 
audit[4370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.404000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:09:12.404000 audit[4370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.404000 audit: BPF prog-id=175 op=LOAD Jan 14 01:09:12.404000 audit[4370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.404000 audit: BPF 
prog-id=176 op=LOAD Jan 14 01:09:12.404000 audit[4370]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.404000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:09:12.404000 audit[4370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.404000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:09:12.404000 audit[4370]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 
Jan 14 01:09:12.405000 audit: BPF prog-id=177 op=LOAD Jan 14 01:09:12.405000 audit[4370]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4343 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838636663623131353134396539316531363931303836333836333132 Jan 14 01:09:12.409000 audit: BPF prog-id=178 op=LOAD Jan 14 01:09:12.411000 audit: BPF prog-id=179 op=LOAD Jan 14 01:09:12.411000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.411000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:09:12.411000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.411000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.411000 audit: BPF prog-id=180 op=LOAD Jan 14 01:09:12.411000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.412000 audit: BPF prog-id=181 op=LOAD Jan 14 01:09:12.412000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.412000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:09:12.412000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:12.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.412000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:09:12.412000 audit[4403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.412000 audit: BPF prog-id=182 op=LOAD Jan 14 01:09:12.412000 audit[4403]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4368 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343533386330356666306536316136663135646535633233633237 Jan 14 01:09:12.412448 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:12.415477 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:12.439345 
containerd[1618]: time="2026-01-14T01:09:12.439115984Z" level=info msg="connecting to shim 567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b" address="unix:///run/containerd/s/1e096ddd0e7de0359dc3a66e2b53bc1e5a1ef92e33ff445e7e3db7245581224d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:12.573611 systemd[1]: Started cri-containerd-567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b.scope - libcontainer container 567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b. Jan 14 01:09:12.611282 containerd[1618]: time="2026-01-14T01:09:12.610362759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4h48,Uid:8cce9086-b715-4037-a543-6640cc00d61c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf\"" Jan 14 01:09:12.615807 kubelet[2817]: E0114 01:09:12.615676 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:12.655627 containerd[1618]: time="2026-01-14T01:09:12.652388026Z" level=info msg="CreateContainer within sandbox \"4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:09:12.676000 audit: BPF prog-id=183 op=LOAD Jan 14 01:09:12.678000 audit: BPF prog-id=184 op=LOAD Jan 14 01:09:12.678000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.678000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.678000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:09:12.678000 audit[4469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.680000 audit: BPF prog-id=185 op=LOAD Jan 14 01:09:12.680000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.680000 audit: BPF prog-id=186 op=LOAD Jan 14 01:09:12.680000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:12.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.681000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:09:12.681000 audit[4469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.681000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:09:12.681000 audit[4469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.681000 audit: BPF prog-id=187 op=LOAD Jan 14 01:09:12.681000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4456 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536376436333439366161373033376461323731656231313263396230 Jan 14 01:09:12.696213 systemd-networkd[1504]: cali3dbde375df7: Link UP Jan 14 01:09:12.699664 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:12.702472 systemd-networkd[1504]: cali3dbde375df7: Gained carrier Jan 14 01:09:12.704497 containerd[1618]: time="2026-01-14T01:09:12.703205899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-f89f6,Uid:b1255f7d-606b-4b44-9160-25a609a72f97,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"88cfcb115149e91e169108638631231b3f8fbc66f6bbffc3110f88d5cba7b0d7\"" Jan 14 01:09:12.729638 containerd[1618]: time="2026-01-14T01:09:12.726086357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:09:12.755072 containerd[1618]: time="2026-01-14T01:09:12.754634531Z" level=info msg="Container a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:12.766195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910544096.mount: Deactivated successfully. 
Jan 14 01:09:12.805981 containerd[1618]: time="2026-01-14T01:09:12.804383018Z" level=info msg="CreateContainer within sandbox \"4d4538c05ff0e61a6f15de5c23c2742f882d2e39ff7475621acfbe5816cd4cdf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5\"" Jan 14 01:09:12.807819 containerd[1618]: time="2026-01-14T01:09:12.807623996Z" level=info msg="StartContainer for \"a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5\"" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.038 [INFO][4296] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.161 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0 coredns-668d6bf9bc- kube-system 578dbb0c-1de2-46fa-a95a-68290169880c 909 0 2026-01-14 01:08:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h7hzf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3dbde375df7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.170 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.358 [INFO][4419] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" HandleID="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Workload="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.358 [INFO][4419] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" HandleID="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Workload="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004dba80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h7hzf", "timestamp":"2026-01-14 01:09:12.358302403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.359 [INFO][4419] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.359 [INFO][4419] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.359 [INFO][4419] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.410 [INFO][4419] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.440 [INFO][4419] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.460 [INFO][4419] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.475 [INFO][4419] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.502 [INFO][4419] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.504 [INFO][4419] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.514 [INFO][4419] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771 Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.562 [INFO][4419] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4419] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4419] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" host="localhost" Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4419] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:12.808450 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4419] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" HandleID="k8s-pod-network.7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Workload="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.673 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"578dbb0c-1de2-46fa-a95a-68290169880c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-h7hzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dbde375df7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.684 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.684 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dbde375df7 ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.706 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.711 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"578dbb0c-1de2-46fa-a95a-68290169880c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771", Pod:"coredns-668d6bf9bc-h7hzf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3dbde375df7", MAC:"c6:cf:51:13:ba:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:12.814305 containerd[1618]: 2026-01-14 01:09:12.781 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7hzf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h7hzf-eth0" Jan 14 01:09:12.814305 containerd[1618]: time="2026-01-14T01:09:12.810133268Z" level=info msg="connecting to shim a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5" address="unix:///run/containerd/s/78da572aaa1d021c5f7b957b8e191080c9cc0d7332ea045f7d9986bd0768ecc1" protocol=ttrpc version=3 Jan 14 01:09:12.848152 containerd[1618]: time="2026-01-14T01:09:12.847724531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:12.851730 containerd[1618]: time="2026-01-14T01:09:12.851691941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:09:12.852139 containerd[1618]: time="2026-01-14T01:09:12.852010832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:12.853004 kubelet[2817]: E0114 01:09:12.852681 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:12.853004 
kubelet[2817]: E0114 01:09:12.852830 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:12.856208 systemd-networkd[1504]: cali7b9cc24fff0: Gained IPv6LL Jan 14 01:09:12.858105 kubelet[2817]: E0114 01:09:12.855007 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8x65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:12.858105 kubelet[2817]: E0114 01:09:12.856542 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:09:12.923265 systemd[1]: Started cri-containerd-a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5.scope - libcontainer container a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5. 
Jan 14 01:09:12.948420 containerd[1618]: time="2026-01-14T01:09:12.947860941Z" level=info msg="connecting to shim 7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771" address="unix:///run/containerd/s/8c87d95a332cdd1d8f8e5f439de19ea8f20de6481a5e1d72cb71f1a9dab59262" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:12.953237 systemd-networkd[1504]: calibe500d7ebe1: Link UP Jan 14 01:09:12.961558 systemd-networkd[1504]: calibe500d7ebe1: Gained carrier Jan 14 01:09:12.984000 audit: BPF prog-id=188 op=LOAD Jan 14 01:09:12.984000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7b2684f0 a2=98 a3=1fffffffffffffff items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.984000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.984000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:09:12.984000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff7b2684c0 a3=0 items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.984000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.986000 audit: BPF prog-id=189 op=LOAD Jan 14 01:09:12.986000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7fff7b2683d0 a2=94 a3=3 items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.986000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:09:12.986000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7b2683d0 a2=94 a3=3 items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.986000 audit: BPF prog-id=190 op=LOAD Jan 14 01:09:12.986000 audit[4584]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff7b268410 a2=94 a3=7fff7b2685f0 items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.986000 audit: BPF prog-id=190 op=UNLOAD Jan 14 
01:09:12.986000 audit[4584]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff7b268410 a2=94 a3=7fff7b2685f0 items=0 ppid=4194 pid=4584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.986000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:09:12.989000 audit: BPF prog-id=191 op=LOAD Jan 14 01:09:12.989000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb42a0700 a2=98 a3=3 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.989000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.989000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:09:12.989000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb42a06d0 a3=0 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.989000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.990000 audit: BPF prog-id=192 op=LOAD Jan 14 01:09:12.990000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb42a04f0 a2=94 a3=54428f items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:09:12.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.990000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:09:12.990000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb42a04f0 a2=94 a3=54428f items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.990000 audit: BPF prog-id=193 op=LOAD Jan 14 01:09:12.990000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb42a0520 a2=94 a3=2 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.990000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:09:12.990000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb42a0520 a2=0 a3=2 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.990000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:12.993000 audit: BPF prog-id=194 op=LOAD Jan 14 01:09:12.994000 audit: BPF prog-id=195 op=LOAD Jan 14 01:09:12.994000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.994000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.994000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:09:12.994000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.994000 audit: BPF prog-id=196 op=LOAD Jan 14 01:09:12.994000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.995000 audit: BPF prog-id=197 op=LOAD Jan 14 01:09:12.995000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.995000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:09:12.995000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.995000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:09:12.995000 audit[4533]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:12.995000 audit: BPF prog-id=198 op=LOAD Jan 14 01:09:12.995000 audit[4533]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4368 pid=4533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:12.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139393837346562356439353462366433613136656136666164396364 Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.018 [INFO][4279] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.108 [INFO][4279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q8jc6-eth0 csi-node-driver- calico-system ba6f7f37-698f-4697-a408-a3efabbcf48e 787 0 2026-01-14 01:08:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q8jc6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibe500d7ebe1 [] [] }} ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.112 [INFO][4279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.479 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" HandleID="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Workload="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.482 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" HandleID="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Workload="localhost-k8s-csi--node--driver--q8jc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000416e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q8jc6", "timestamp":"2026-01-14 01:09:12.479047832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.483 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.633 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.686 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.734 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.785 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.791 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.796 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.798 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.804 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162 Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.832 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.871 [INFO][4377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.890 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" host="localhost" Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.891 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:13.011167 containerd[1618]: 2026-01-14 01:09:12.892 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" HandleID="k8s-pod-network.01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Workload="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:12.916 [INFO][4279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q8jc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba6f7f37-698f-4697-a408-a3efabbcf48e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q8jc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe500d7ebe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:12.917 [INFO][4279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:12.917 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe500d7ebe1 ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:12.961 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:12.962 [INFO][4279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" 
Namespace="calico-system" Pod="csi-node-driver-q8jc6" WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q8jc6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ba6f7f37-698f-4697-a408-a3efabbcf48e", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162", Pod:"csi-node-driver-q8jc6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibe500d7ebe1", MAC:"6a:f2:32:f5:2c:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:13.012089 containerd[1618]: 2026-01-14 01:09:13.002 [INFO][4279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" Namespace="calico-system" Pod="csi-node-driver-q8jc6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--q8jc6-eth0" Jan 14 01:09:13.031573 containerd[1618]: time="2026-01-14T01:09:13.030982605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548b5c59b-jhbn9,Uid:0a01ad60-6870-4247-94ac-0665fa604563,Namespace:calico-system,Attempt:0,} returns sandbox id \"567d63496aa7037da271eb112c9b0e3f0b522d3bc7afec292aa2d4823dddb63b\"" Jan 14 01:09:13.032838 systemd[1]: Started cri-containerd-7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771.scope - libcontainer container 7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771. Jan 14 01:09:13.044872 containerd[1618]: time="2026-01-14T01:09:13.044711268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:09:13.073000 audit: BPF prog-id=199 op=LOAD Jan 14 01:09:13.075000 audit: BPF prog-id=200 op=LOAD Jan 14 01:09:13.075000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.075000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:09:13.075000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.075000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.076000 audit: BPF prog-id=201 op=LOAD Jan 14 01:09:13.076000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.077000 audit: BPF prog-id=202 op=LOAD Jan 14 01:09:13.077000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.078000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:09:13.078000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:13.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.079000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:09:13.079000 audit[4587]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.079000 audit: BPF prog-id=203 op=LOAD Jan 14 01:09:13.079000 audit[4587]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4565 pid=4587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733313464363738313862316537373433393939343235333661356437 Jan 14 01:09:13.081336 containerd[1618]: time="2026-01-14T01:09:13.078561708Z" level=info msg="StartContainer for \"a99874eb5d954b6d3a16ea6fad9cdb6a163000bd33aac30113b0991398e6cfd5\" returns successfully" Jan 14 01:09:13.083568 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: 
No such device or address Jan 14 01:09:13.108203 systemd-networkd[1504]: calibd0dde9fe19: Gained IPv6LL Jan 14 01:09:13.119707 containerd[1618]: time="2026-01-14T01:09:13.119659297Z" level=info msg="connecting to shim 01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162" address="unix:///run/containerd/s/b6b9be2926f297ee05e5c482d50e8291769a8f0a84bb132460ab0f540de19eec" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:13.129411 containerd[1618]: time="2026-01-14T01:09:13.129376541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:13.133667 containerd[1618]: time="2026-01-14T01:09:13.133487930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:09:13.133667 containerd[1618]: time="2026-01-14T01:09:13.133589772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:13.136840 kubelet[2817]: E0114 01:09:13.133858 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:13.136840 kubelet[2817]: E0114 01:09:13.136395 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:13.136840 kubelet[2817]: E0114 01:09:13.136542 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c8b989ab0e84e66925d2b86c9e93775,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:13.140552 containerd[1618]: time="2026-01-14T01:09:13.140521766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:09:13.178446 containerd[1618]: 
time="2026-01-14T01:09:13.177676818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7hzf,Uid:578dbb0c-1de2-46fa-a95a-68290169880c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771\"" Jan 14 01:09:13.180102 kubelet[2817]: E0114 01:09:13.180061 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:13.186507 containerd[1618]: time="2026-01-14T01:09:13.186468722Z" level=info msg="CreateContainer within sandbox \"7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:09:13.229395 containerd[1618]: time="2026-01-14T01:09:13.228420710Z" level=info msg="Container a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:09:13.259548 containerd[1618]: time="2026-01-14T01:09:13.259416878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:13.260978 containerd[1618]: time="2026-01-14T01:09:13.260834379Z" level=info msg="CreateContainer within sandbox \"7314d67818b1e774399942536a5d78092bf0317c280af6f6fc91998769fa5771\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70\"" Jan 14 01:09:13.264238 containerd[1618]: time="2026-01-14T01:09:13.264062269Z" level=info msg="StartContainer for \"a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70\"" Jan 14 01:09:13.266842 containerd[1618]: time="2026-01-14T01:09:13.266714048Z" level=info msg="connecting to shim a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70" address="unix:///run/containerd/s/8c87d95a332cdd1d8f8e5f439de19ea8f20de6481a5e1d72cb71f1a9dab59262" protocol=ttrpc version=3 Jan 14 01:09:13.272144 systemd[1]: 
Started cri-containerd-01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162.scope - libcontainer container 01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162. Jan 14 01:09:13.297520 containerd[1618]: time="2026-01-14T01:09:13.297363204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:09:13.297520 containerd[1618]: time="2026-01-14T01:09:13.297477139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:13.299529 kubelet[2817]: E0114 01:09:13.298267 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:13.299529 kubelet[2817]: E0114 01:09:13.298329 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:13.299529 kubelet[2817]: E0114 01:09:13.298454 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:13.300422 kubelet[2817]: E0114 01:09:13.299850 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:13.338000 audit: BPF prog-id=204 op=LOAD Jan 14 01:09:13.340000 audit: BPF prog-id=205 op=LOAD Jan 14 01:09:13.340000 audit[4650]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001da238 a2=98 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.340000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:09:13.340000 audit[4650]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.342000 audit: BPF prog-id=206 op=LOAD Jan 14 01:09:13.342000 audit[4650]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001da488 a2=98 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.342000 audit: BPF prog-id=207 op=LOAD Jan 14 01:09:13.342000 audit[4650]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001da218 a2=98 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.342000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:09:13.342000 audit[4650]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.343000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:09:13.343000 audit[4650]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.343000 audit: BPF prog-id=208 op=LOAD Jan 14 01:09:13.343000 audit[4650]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001da6e8 a2=98 a3=0 items=0 ppid=4630 pid=4650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031333832343536656666343565616166653563633962343039393564 Jan 14 01:09:13.349337 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:13.351419 systemd[1]: Started 
cri-containerd-a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70.scope - libcontainer container a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70. Jan 14 01:09:13.402000 audit: BPF prog-id=209 op=LOAD Jan 14 01:09:13.405000 audit: BPF prog-id=210 op=LOAD Jan 14 01:09:13.405000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.405000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:09:13.405000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.406000 audit: BPF prog-id=211 op=LOAD Jan 14 01:09:13.406000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.406000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.406000 audit: BPF prog-id=212 op=LOAD Jan 14 01:09:13.406000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.406000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:09:13.406000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.406000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:09:13.406000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:13.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.407000 audit: BPF prog-id=213 op=LOAD Jan 14 01:09:13.407000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4565 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133643761623062383061666165643533633737373831643531313164 Jan 14 01:09:13.435774 containerd[1618]: time="2026-01-14T01:09:13.435568492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q8jc6,Uid:ba6f7f37-698f-4697-a408-a3efabbcf48e,Namespace:calico-system,Attempt:0,} returns sandbox id \"01382456eff45eaafe5cc9b40995dc28ed973e189bad8a9451444869e9dcf162\"" Jan 14 01:09:13.455069 containerd[1618]: time="2026-01-14T01:09:13.454975519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:09:13.492063 containerd[1618]: time="2026-01-14T01:09:13.490057625Z" level=info msg="StartContainer for \"a3d7ab0b80afaed53c77781d5111d3fa30998689abe47f4d5c6ecf4adcbc6e70\" returns successfully" Jan 14 01:09:13.532359 containerd[1618]: time="2026-01-14T01:09:13.532213624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:13.544221 containerd[1618]: time="2026-01-14T01:09:13.544025762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:09:13.544221 containerd[1618]: time="2026-01-14T01:09:13.544045038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:13.544497 kubelet[2817]: E0114 01:09:13.544453 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:13.545035 kubelet[2817]: E0114 01:09:13.545005 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:13.547129 kubelet[2817]: E0114 01:09:13.545647 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:09:13.574683 containerd[1618]: time="2026-01-14T01:09:13.574385072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:09:13.593000 audit: BPF prog-id=214 op=LOAD Jan 14 01:09:13.593000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb42a03e0 a2=94 a3=1 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.593000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.594000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:09:13.594000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb42a03e0 a2=94 a3=1 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.594000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.615000 audit: BPF prog-id=215 op=LOAD Jan 14 01:09:13.615000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb42a03d0 a2=94 a3=4 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.615000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.616000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:09:13.616000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb42a03d0 a2=0 a3=4 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.616000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.617000 audit: BPF prog-id=216 op=LOAD Jan 14 01:09:13.617000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb42a0230 a2=94 a3=5 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.617000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:09:13.617000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb42a0230 a2=0 a3=5 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.617000 audit: BPF prog-id=217 op=LOAD Jan 14 01:09:13.617000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb42a0450 a2=94 a3=6 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.617000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.617000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:09:13.617000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb42a0450 a2=0 a3=6 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.617000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.618000 audit: BPF prog-id=218 op=LOAD Jan 14 01:09:13.618000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb429fc00 a2=94 a3=88 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.618000 audit: BPF prog-id=219 op=LOAD Jan 14 01:09:13.618000 audit[4588]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffeb429fa80 a2=94 a3=2 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.618000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:09:13.618000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffeb429fab0 a2=0 a3=7ffeb429fbb0 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.618000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.619000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:09:13.619000 audit[4588]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2a5e8d10 a2=0 a3=974acca8b13aaf00 items=0 ppid=4194 pid=4588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.619000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:09:13.639000 audit: BPF prog-id=220 op=LOAD Jan 14 01:09:13.639000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd258d0ca0 a2=98 a3=1999999999999999 items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.639000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.639000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:09:13.639000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd258d0c70 a3=0 items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.639000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.639000 audit: BPF prog-id=221 op=LOAD Jan 14 01:09:13.639000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd258d0b80 a2=94 a3=ffff items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.639000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.640000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:09:13.640000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd258d0b80 a2=94 a3=ffff items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.640000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.640000 audit: BPF prog-id=222 op=LOAD Jan 14 01:09:13.640000 audit[4716]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd258d0bc0 a2=94 a3=7ffd258d0da0 items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.640000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.640000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:09:13.640000 audit[4716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd258d0bc0 a2=94 a3=7ffd258d0da0 items=0 ppid=4194 pid=4716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.640000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:09:13.650974 containerd[1618]: time="2026-01-14T01:09:13.649665008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:13.652778 containerd[1618]: time="2026-01-14T01:09:13.652743237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:13.653605 containerd[1618]: time="2026-01-14T01:09:13.653478667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:09:13.655352 kubelet[2817]: E0114 01:09:13.655023 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:13.655352 kubelet[2817]: E0114 01:09:13.655271 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:13.657064 kubelet[2817]: E0114 01:09:13.656577 2817 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:13.659161 kubelet[2817]: E0114 01:09:13.659006 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:09:13.776126 systemd-networkd[1504]: vxlan.calico: Link UP Jan 14 01:09:13.776138 systemd-networkd[1504]: vxlan.calico: Gained carrier Jan 14 01:09:13.810392 kubelet[2817]: E0114 01:09:13.796559 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:13.810392 kubelet[2817]: E0114 01:09:13.809593 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:09:13.810392 kubelet[2817]: E0114 01:09:13.812049 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:13.816718 kubelet[2817]: E0114 01:09:13.816605 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:09:13.823966 kubelet[2817]: E0114 01:09:13.823838 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:13.857027 kubelet[2817]: I0114 01:09:13.855120 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d4h48" podStartSLOduration=61.855096327 podStartE2EDuration="1m1.855096327s" podCreationTimestamp="2026-01-14 01:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:09:13.851767508 +0000 UTC m=+66.279167391" watchObservedRunningTime="2026-01-14 01:09:13.855096327 +0000 UTC m=+66.282496181" Jan 14 01:09:13.897084 systemd-networkd[1504]: calie57204e558b: Gained IPv6LL Jan 14 01:09:13.904000 audit: BPF prog-id=223 op=LOAD Jan 14 01:09:13.904000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7d0bf580 a2=98 a3=0 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.904000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.906000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:09:13.906000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd7d0bf550 a3=0 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.906000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.907000 audit: BPF prog-id=224 op=LOAD Jan 14 01:09:13.907000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7d0bf390 a2=94 a3=54428f items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.907000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7d0bf390 a2=94 a3=54428f items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=225 op=LOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7d0bf3c0 a2=94 a3=2 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd7d0bf3c0 a2=0 a3=2 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=226 op=LOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7d0bf170 a2=94 a3=4 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd7d0bf170 a2=94 a3=4 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.908000 audit: BPF prog-id=227 op=LOAD Jan 14 01:09:13.908000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7d0bf270 a2=94 a3=7ffd7d0bf3f0 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.908000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.909000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:09:13.909000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd7d0bf270 a2=0 a3=7ffd7d0bf3f0 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.909000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.911000 audit: BPF prog-id=228 op=LOAD Jan 14 01:09:13.911000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7d0be9a0 a2=94 a3=2 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.911000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.911000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:09:13.911000 audit[4745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd7d0be9a0 a2=0 a3=2 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.911000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.911000 audit: BPF prog-id=229 op=LOAD Jan 14 01:09:13.911000 audit[4745]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7d0beaa0 a2=94 a3=30 items=0 ppid=4194 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.911000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:09:13.946000 audit: BPF prog-id=230 op=LOAD Jan 14 01:09:13.946000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff61a96d0 a2=98 a3=0 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.946000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.946000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:09:13.946000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffff61a96a0 a3=0 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.946000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.947000 audit: BPF prog-id=231 op=LOAD Jan 14 01:09:13.947000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff61a94c0 a2=94 a3=54428f items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.947000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:09:13.947000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff61a94c0 a2=94 a3=54428f items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.947000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.947000 audit: BPF prog-id=232 op=LOAD Jan 14 01:09:13.947000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff61a94f0 a2=94 a3=2 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.947000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:09:13.947000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff61a94f0 a2=0 a3=2 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.947000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:13.970000 audit[4758]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:13.970000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe8ec0e2d0 a2=0 a3=7ffe8ec0e2bc items=0 ppid=2971 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.970000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:13.980000 audit[4758]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:13.980000 audit[4758]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe8ec0e2d0 a2=0 a3=0 items=0 ppid=2971 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:13.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:14.000027 kubelet[2817]: I0114 01:09:13.999197 2817 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h7hzf" podStartSLOduration=61.999176583 podStartE2EDuration="1m1.999176583s" podCreationTimestamp="2026-01-14 01:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:09:13.913662974 +0000 UTC m=+66.341062859" watchObservedRunningTime="2026-01-14 01:09:13.999176583 +0000 UTC m=+66.426576436" Jan 14 01:09:14.163000 audit[4760]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:14.163000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd5045cea0 a2=0 a3=7ffd5045ce8c items=0 ppid=2971 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.163000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:14.191000 audit[4760]: NETFILTER_CFG table=nat:126 family=2 entries=47 op=nft_register_chain pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:14.191000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd5045cea0 a2=0 a3=7ffd5045ce8c items=0 ppid=2971 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.191000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:14.291000 audit: BPF prog-id=233 op=LOAD Jan 14 01:09:14.291000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffff61a93b0 a2=94 a3=1 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.291000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.292000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:09:14.292000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffff61a93b0 a2=94 a3=1 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.292000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.304000 audit: BPF prog-id=234 op=LOAD Jan 14 01:09:14.304000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff61a93a0 a2=94 a3=4 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.304000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:09:14.304000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff61a93a0 a2=0 a3=4 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.305000 audit: BPF prog-id=235 op=LOAD Jan 14 01:09:14.305000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff61a9200 a2=94 a3=5 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.305000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.305000 audit: BPF prog-id=235 op=UNLOAD Jan 14 01:09:14.305000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffff61a9200 a2=0 a3=5 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.305000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.305000 audit: BPF prog-id=236 op=LOAD Jan 14 01:09:14.305000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff61a9420 a2=94 a3=6 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.305000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.305000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:09:14.305000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffff61a9420 a2=0 a3=6 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.305000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.305000 audit: BPF prog-id=237 op=LOAD Jan 14 01:09:14.305000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffff61a8bd0 a2=94 a3=88 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.305000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.306000 audit: BPF prog-id=238 op=LOAD Jan 14 01:09:14.306000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffff61a8a50 a2=94 a3=2 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.306000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.306000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:09:14.306000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffff61a8a80 a2=0 a3=7ffff61a8b80 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.306000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.307000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:09:14.307000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3213cd10 a2=0 a3=3a00ce3d31605746 items=0 ppid=4194 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.307000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:09:14.331000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:09:14.331000 audit[4194]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000a0adc0 a2=0 a3=0 items=0 ppid=4185 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.331000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:09:14.433000 audit[4787]: NETFILTER_CFG table=nat:127 family=2 entries=15 op=nft_register_chain pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:14.433000 audit[4787]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe3cee0790 a2=0 a3=7ffe3cee077c items=0 ppid=4194 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.433000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:14.443000 audit[4791]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=4791 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:14.443000 audit[4791]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff9fa803e0 a2=0 a3=7fff9fa803cc items=0 ppid=4194 pid=4791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.443000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:14.447000 audit[4786]: NETFILTER_CFG table=raw:129 family=2 entries=21 op=nft_register_chain pid=4786 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:14.447000 audit[4786]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe45a8dab0 a2=0 a3=7ffe45a8da9c items=0 ppid=4194 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.447000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:14.453277 systemd-networkd[1504]: cali3dbde375df7: Gained IPv6LL Jan 14 01:09:14.454463 systemd-networkd[1504]: calibe500d7ebe1: Gained IPv6LL Jan 14 01:09:14.451000 audit[4788]: NETFILTER_CFG table=filter:130 family=2 entries=228 op=nft_register_chain pid=4788 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:14.451000 audit[4788]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=132672 a0=3 a1=7ffd2816d6b0 a2=0 a3=7ffd2816d69c items=0 ppid=4194 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:14.451000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:14.819865 kubelet[2817]: E0114 01:09:14.819590 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:14.821653 kubelet[2817]: E0114 01:09:14.821568 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:14.828604 kubelet[2817]: E0114 01:09:14.828492 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:09:14.829233 kubelet[2817]: E0114 
01:09:14.828648 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:15.413366 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Jan 14 01:09:15.823420 kubelet[2817]: E0114 01:09:15.823294 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:15.824231 kubelet[2817]: E0114 01:09:15.823541 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:18.732365 kubelet[2817]: E0114 01:09:18.732057 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:19.733650 containerd[1618]: time="2026-01-14T01:09:19.733433363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:09:20.215535 systemd-networkd[1504]: cali144e8c76e62: 
Link UP Jan 14 01:09:20.224311 systemd-networkd[1504]: cali144e8c76e62: Gained carrier Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.864 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0 calico-apiserver-b64b7c788- calico-apiserver 31330de6-3f41-4f44-bd94-776d84913764 915 0 2026-01-14 01:08:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b64b7c788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b64b7c788-2crrf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali144e8c76e62 [] [] }} ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.864 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.934 [INFO][4822] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" HandleID="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Workload="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.934 [INFO][4822] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" HandleID="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Workload="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b64b7c788-2crrf", "timestamp":"2026-01-14 01:09:19.934202421 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.934 [INFO][4822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.934 [INFO][4822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.934 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:19.962 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.016 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.098 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.112 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.120 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.120 [INFO][4822] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.130 [INFO][4822] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.161 [INFO][4822] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.181 [INFO][4822] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.181 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" host="localhost" Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.181 [INFO][4822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
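An editorial aside on the Calico IPAM entries above: the plugin confirms host affinity for the block 192.168.88.128/26 and then claims 192.168.88.134 from it. That arithmetic can be sanity-checked with Python's standard `ipaddress` module (this is an illustrative check, not part of Calico's code):

```python
import ipaddress

# Sanity-check the IPAM assignment logged above: the host's affinity
# block is 192.168.88.128/26 and the address handed out is 192.168.88.134.
block = ipaddress.ip_network("192.168.88.128/26")
addr = ipaddress.ip_address("192.168.88.134")

print(addr in block)        # True: the assigned IP falls inside the block
print(block.num_addresses)  # 64: a /26 covers 64 addresses
```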
Jan 14 01:09:20.254957 containerd[1618]: 2026-01-14 01:09:20.181 [INFO][4822] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" HandleID="k8s-pod-network.e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Workload="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.205 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0", GenerateName:"calico-apiserver-b64b7c788-", Namespace:"calico-apiserver", SelfLink:"", UID:"31330de6-3f41-4f44-bd94-776d84913764", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b64b7c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b64b7c788-2crrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144e8c76e62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.207 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.207 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali144e8c76e62 ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.222 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.226 [INFO][4807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0", GenerateName:"calico-apiserver-b64b7c788-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"31330de6-3f41-4f44-bd94-776d84913764", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b64b7c788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b", Pod:"calico-apiserver-b64b7c788-2crrf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali144e8c76e62", MAC:"32:6a:04:88:33:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:20.256122 containerd[1618]: 2026-01-14 01:09:20.249 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" Namespace="calico-apiserver" Pod="calico-apiserver-b64b7c788-2crrf" WorkloadEndpoint="localhost-k8s-calico--apiserver--b64b7c788--2crrf-eth0" Jan 14 01:09:20.287000 audit[4840]: NETFILTER_CFG table=filter:131 family=2 entries=49 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:20.292357 kernel: kauditd_printk_skb: 369 callbacks suppressed Jan 14 01:09:20.292433 kernel: audit: type=1325 
audit(1768352960.287:702): table=filter:131 family=2 entries=49 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:20.287000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffef5f223a0 a2=0 a3=7ffef5f2238c items=0 ppid=4194 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.323284 kernel: audit: type=1300 audit(1768352960.287:702): arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffef5f223a0 a2=0 a3=7ffef5f2238c items=0 ppid=4194 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.324820 kernel: audit: type=1327 audit(1768352960.287:702): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:20.287000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:20.325051 containerd[1618]: time="2026-01-14T01:09:20.324215761Z" level=info msg="connecting to shim e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b" address="unix:///run/containerd/s/8ef9eaa38ca364321c7c553f0e9bbb51af4d98f3b96aabf1b37aa2a7fdc5b20c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:20.375726 systemd[1]: Started cri-containerd-e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b.scope - libcontainer container e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b. 
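An editorial aside on the audit PROCTITLE records above: the kernel logs the process title as a hex string with NUL bytes separating the argv elements. A small decoder (illustrative only) recovers the command line from the record logged above:

```python
# Decode an audit PROCTITLE hex string into the original argv.
# The kernel encodes the process title as hex, with NUL bytes
# separating the individual arguments.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The proctitle value from the audit record above.
args = decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"
)
print(args)
# → ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
#    '--wait-interval', '50000']
```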
Jan 14 01:09:20.405038 kernel: audit: type=1334 audit(1768352960.399:703): prog-id=239 op=LOAD Jan 14 01:09:20.399000 audit: BPF prog-id=239 op=LOAD Jan 14 01:09:20.403000 audit: BPF prog-id=240 op=LOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.413045 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:20.422025 kernel: audit: type=1334 audit(1768352960.403:704): prog-id=240 op=LOAD Jan 14 01:09:20.422094 kernel: audit: type=1300 audit(1768352960.403:704): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.439087 kernel: audit: type=1327 audit(1768352960.403:704): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.439221 kernel: audit: type=1334 audit(1768352960.403:705): prog-id=240 op=UNLOAD Jan 14 01:09:20.403000 audit: BPF prog-id=240 op=UNLOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.459014 kernel: audit: type=1300 audit(1768352960.403:705): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.459097 kernel: audit: type=1327 audit(1768352960.403:705): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: BPF prog-id=241 op=LOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: BPF prog-id=242 op=LOAD Jan 14 01:09:20.403000 
audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.403000 audit: BPF 
prog-id=243 op=LOAD Jan 14 01:09:20.403000 audit[4859]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4848 pid=4859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663239386638373162623330623630343331353336663064613630 Jan 14 01:09:20.522783 containerd[1618]: time="2026-01-14T01:09:20.522650895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b64b7c788-2crrf,Uid:31330de6-3f41-4f44-bd94-776d84913764,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e0f298f871bb30b60431536f0da60b1318416a89cab7492998a66880a30bce1b\"" Jan 14 01:09:20.525020 containerd[1618]: time="2026-01-14T01:09:20.524995228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:09:20.603352 containerd[1618]: time="2026-01-14T01:09:20.603095399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:20.607703 containerd[1618]: time="2026-01-14T01:09:20.607545534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:09:20.607703 containerd[1618]: time="2026-01-14T01:09:20.607660469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:20.608129 kubelet[2817]: E0114 01:09:20.608048 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:20.608129 kubelet[2817]: E0114 01:09:20.608104 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:20.608533 kubelet[2817]: E0114 01:09:20.608243 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:20.610335 kubelet[2817]: E0114 01:09:20.609808 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:09:20.734770 containerd[1618]: time="2026-01-14T01:09:20.734527599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:20.862377 kubelet[2817]: E0114 01:09:20.862274 
2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:09:20.961000 audit[4909]: NETFILTER_CFG table=filter:132 family=2 entries=14 op=nft_register_rule pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:20.961000 audit[4909]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea83619d0 a2=0 a3=7ffea83619bc items=0 ppid=2971 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:20.980000 audit[4909]: NETFILTER_CFG table=nat:133 family=2 entries=20 op=nft_register_rule pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:20.980000 audit[4909]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea83619d0 a2=0 a3=7ffea83619bc items=0 ppid=2971 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:20.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:21.067588 systemd-networkd[1504]: cali6fac2316388: Link UP Jan 14 01:09:21.070732 
systemd-networkd[1504]: cali6fac2316388: Gained carrier Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.853 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0 calico-kube-controllers-77fd4f6b7c- calico-system f73e61d7-350a-471c-9476-00cd84fadf64 908 0 2026-01-14 01:08:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77fd4f6b7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77fd4f6b7c-sb5qb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6fac2316388 [] [] }} ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.854 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.950 [INFO][4900] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" HandleID="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Workload="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.950 [INFO][4900] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" HandleID="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Workload="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000517db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77fd4f6b7c-sb5qb", "timestamp":"2026-01-14 01:09:20.950458164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.950 [INFO][4900] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.950 [INFO][4900] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
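An editorial aside on the recurring kubelet "Nameserver limits exceeded" warnings in this log: the Linux resolver honors at most three nameservers (glibc's MAXNS), so kubelet drops any extras and applies only the first three. A minimal sketch of that truncation, assuming a hypothetical fourth server (`192.0.2.53` is an invented example; the log above shows only the three applied servers):

```python
# The Linux resolver uses at most 3 nameservers (glibc MAXNS), so
# kubelet truncates the configured list and warns about the rest.
# 192.0.2.53 is a hypothetical extra entry for illustration; the
# applied line in the log above is exactly the first three servers.
MAXNS = 3
configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"]
applied = configured[:MAXNS]
print(" ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8
```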
Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.950 [INFO][4900] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.970 [INFO][4900] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.983 [INFO][4900] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:20.996 [INFO][4900] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.000 [INFO][4900] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.007 [INFO][4900] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.008 [INFO][4900] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.011 [INFO][4900] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.026 [INFO][4900] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.051 [INFO][4900] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.051 [INFO][4900] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" host="localhost" Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.051 [INFO][4900] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:09:21.104114 containerd[1618]: 2026-01-14 01:09:21.051 [INFO][4900] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" HandleID="k8s-pod-network.49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Workload="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.105501 containerd[1618]: 2026-01-14 01:09:21.057 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0", GenerateName:"calico-kube-controllers-77fd4f6b7c-", Namespace:"calico-system", SelfLink:"", UID:"f73e61d7-350a-471c-9476-00cd84fadf64", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fd4f6b7c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77fd4f6b7c-sb5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fac2316388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:21.105501 containerd[1618]: 2026-01-14 01:09:21.057 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.105501 containerd[1618]: 2026-01-14 01:09:21.057 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fac2316388 ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.105501 containerd[1618]: 2026-01-14 01:09:21.071 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.105501 containerd[1618]: 
2026-01-14 01:09:21.074 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0", GenerateName:"calico-kube-controllers-77fd4f6b7c-", Namespace:"calico-system", SelfLink:"", UID:"f73e61d7-350a-471c-9476-00cd84fadf64", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fd4f6b7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d", Pod:"calico-kube-controllers-77fd4f6b7c-sb5qb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6fac2316388", MAC:"62:b4:89:2f:ca:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:21.105501 containerd[1618]: 
2026-01-14 01:09:21.098 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" Namespace="calico-system" Pod="calico-kube-controllers-77fd4f6b7c-sb5qb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fd4f6b7c--sb5qb-eth0" Jan 14 01:09:21.137000 audit[4919]: NETFILTER_CFG table=filter:134 family=2 entries=52 op=nft_register_chain pid=4919 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:21.137000 audit[4919]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7fff539ed0c0 a2=0 a3=7fff539ed0ac items=0 ppid=4194 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.137000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:21.190463 containerd[1618]: time="2026-01-14T01:09:21.190342748Z" level=info msg="connecting to shim 49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d" address="unix:///run/containerd/s/02acf8d1af9ce5add8fe7d8abdb0d9d2961648704c44c62659c94ac64dd89952" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:21.276463 systemd[1]: Started cri-containerd-49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d.scope - libcontainer container 49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d. 
Jan 14 01:09:21.307000 audit: BPF prog-id=244 op=LOAD Jan 14 01:09:21.309000 audit: BPF prog-id=245 op=LOAD Jan 14 01:09:21.309000 audit[4939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.309000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.310000 audit: BPF prog-id=245 op=UNLOAD Jan 14 01:09:21.310000 audit[4939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.311000 audit: BPF prog-id=246 op=LOAD Jan 14 01:09:21.311000 audit[4939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.311000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.311000 audit: BPF prog-id=247 op=LOAD Jan 14 01:09:21.311000 audit[4939]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.311000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:09:21.311000 audit[4939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.311000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:09:21.311000 audit[4939]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:09:21.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.311000 audit: BPF prog-id=248 op=LOAD Jan 14 01:09:21.311000 audit[4939]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4928 pid=4939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:21.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439623363663465653635313161646330613538616633613330383464 Jan 14 01:09:21.316158 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:21.407525 containerd[1618]: time="2026-01-14T01:09:21.407382653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fd4f6b7c-sb5qb,Uid:f73e61d7-350a-471c-9476-00cd84fadf64,Namespace:calico-system,Attempt:0,} returns sandbox id \"49b3cf4ee6511adc0a58af3a3084df39bdea8f3ba0ad7b144ed4fc706ac6268d\"" Jan 14 01:09:21.411831 containerd[1618]: time="2026-01-14T01:09:21.411634086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:09:21.486062 containerd[1618]: time="2026-01-14T01:09:21.483841208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:21.489810 containerd[1618]: time="2026-01-14T01:09:21.489704567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:09:21.490472 containerd[1618]: time="2026-01-14T01:09:21.490427413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:21.491795 kubelet[2817]: E0114 01:09:21.491409 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:09:21.491795 kubelet[2817]: E0114 01:09:21.491515 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:09:21.491795 kubelet[2817]: E0114 01:09:21.491708 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:21.493076 systemd-networkd[1504]: cali144e8c76e62: Gained IPv6LL Jan 14 01:09:21.496719 kubelet[2817]: E0114 01:09:21.494766 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:21.876437 kubelet[2817]: E0114 01:09:21.875164 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:21.876437 kubelet[2817]: E0114 01:09:21.875380 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:09:22.518529 systemd-networkd[1504]: cali6fac2316388: Gained IPv6LL Jan 14 01:09:22.732629 containerd[1618]: time="2026-01-14T01:09:22.732522757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,}" Jan 14 01:09:22.879439 kubelet[2817]: E0114 01:09:22.878336 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:23.107733 systemd-networkd[1504]: cali7a4d3c0e3a2: Link UP Jan 14 
01:09:23.114814 systemd-networkd[1504]: cali7a4d3c0e3a2: Gained carrier Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.836 [INFO][4963] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--8zxlx-eth0 goldmane-666569f655- calico-system f6cd6e89-0e6a-47aa-ae56-5f40f27190c0 905 0 2026-01-14 01:08:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-8zxlx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a4d3c0e3a2 [] [] }} ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.837 [INFO][4963] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.918 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" HandleID="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Workload="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.919 [INFO][4977] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" HandleID="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" 
Workload="localhost-k8s-goldmane--666569f655--8zxlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003259e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-8zxlx", "timestamp":"2026-01-14 01:09:22.918397192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.919 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.919 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.919 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.953 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:22.974 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.009 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.013 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.025 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.025 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.030 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.058 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.091 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.091 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" host="localhost" Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.091 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:09:23.161036 containerd[1618]: 2026-01-14 01:09:23.091 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" HandleID="k8s-pod-network.12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Workload="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.100 [INFO][4963] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8zxlx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-8zxlx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a4d3c0e3a2", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.100 [INFO][4963] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.100 [INFO][4963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a4d3c0e3a2 ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.116 [INFO][4963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.117 [INFO][4963] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--8zxlx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f6cd6e89-0e6a-47aa-ae56-5f40f27190c0", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 8, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca", Pod:"goldmane-666569f655-8zxlx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a4d3c0e3a2", MAC:"fa:5a:89:4c:53:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:09:23.162706 containerd[1618]: 2026-01-14 01:09:23.151 [INFO][4963] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" Namespace="calico-system" Pod="goldmane-666569f655-8zxlx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--8zxlx-eth0" Jan 14 01:09:23.186000 audit[4996]: NETFILTER_CFG table=filter:135 family=2 entries=70 op=nft_register_chain pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:09:23.186000 audit[4996]: SYSCALL arch=c000003e syscall=46 success=yes exit=33956 a0=3 a1=7fff3e165880 a2=0 a3=7fff3e16586c items=0 ppid=4194 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.186000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:09:23.238825 containerd[1618]: time="2026-01-14T01:09:23.238773522Z" level=info msg="connecting to shim 12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca" address="unix:///run/containerd/s/a870611c647c0690bfead66c4a6c32e102d809ee52b03aee3827e98ed1e0ab8a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:09:23.305717 systemd[1]: Started cri-containerd-12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca.scope - libcontainer container 12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca. Jan 14 01:09:23.354000 audit: BPF prog-id=249 op=LOAD Jan 14 01:09:23.356000 audit: BPF prog-id=250 op=LOAD Jan 14 01:09:23.356000 audit[5017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.356000 audit: BPF prog-id=250 op=UNLOAD Jan 14 01:09:23.356000 audit[5017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.356000 audit: BPF prog-id=251 op=LOAD Jan 14 01:09:23.356000 audit[5017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.357000 audit: BPF prog-id=252 op=LOAD Jan 14 01:09:23.357000 audit[5017]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.357000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:09:23.357000 audit[5017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:09:23.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.357000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:09:23.357000 audit[5017]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.358000 audit: BPF prog-id=253 op=LOAD Jan 14 01:09:23.358000 audit[5017]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5005 pid=5017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132656138653832303730643763393135386637663962353835333166 Jan 14 01:09:23.362638 systemd-resolved[1297]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 14 01:09:23.457875 containerd[1618]: time="2026-01-14T01:09:23.457525820Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-8zxlx,Uid:f6cd6e89-0e6a-47aa-ae56-5f40f27190c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"12ea8e82070d7c9158f7f9b58531f9038f29f0d69a0fd09544e55ded898284ca\"" Jan 14 01:09:23.466504 containerd[1618]: time="2026-01-14T01:09:23.466080741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:09:23.547357 containerd[1618]: time="2026-01-14T01:09:23.547297773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:23.553581 containerd[1618]: time="2026-01-14T01:09:23.553442706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:09:23.553581 containerd[1618]: time="2026-01-14T01:09:23.553557642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:23.554382 kubelet[2817]: E0114 01:09:23.554339 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:09:23.555385 kubelet[2817]: E0114 01:09:23.554535 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:09:23.555385 kubelet[2817]: E0114 01:09:23.554691 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:23.557230 kubelet[2817]: E0114 01:09:23.557136 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:23.891761 kubelet[2817]: E0114 01:09:23.887652 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:23.966000 audit[5042]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:23.966000 audit[5042]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd39ee79e0 a2=0 a3=7ffd39ee79cc items=0 ppid=2971 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:23.976000 audit[5042]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:09:23.976000 audit[5042]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd39ee79e0 a2=0 a3=7ffd39ee79cc items=0 ppid=2971 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:09:23.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:09:24.885215 systemd-networkd[1504]: cali7a4d3c0e3a2: Gained IPv6LL Jan 14 01:09:24.888264 kubelet[2817]: E0114 01:09:24.887821 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:25.735993 containerd[1618]: time="2026-01-14T01:09:25.735834819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:09:25.823316 containerd[1618]: time="2026-01-14T01:09:25.822235305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:25.827102 containerd[1618]: time="2026-01-14T01:09:25.826616279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:09:25.827102 containerd[1618]: time="2026-01-14T01:09:25.826705315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:25.827233 kubelet[2817]: E0114 01:09:25.827118 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:25.827233 kubelet[2817]: E0114 01:09:25.827216 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:25.827683 kubelet[2817]: E0114 01:09:25.827466 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c8b989ab0e84e66925d2b86c9e93775,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:25.832568 containerd[1618]: time="2026-01-14T01:09:25.831158804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:09:25.926520 containerd[1618]: 
time="2026-01-14T01:09:25.926102850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:25.930415 containerd[1618]: time="2026-01-14T01:09:25.930315658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:09:25.930641 containerd[1618]: time="2026-01-14T01:09:25.930530511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:25.931053 kubelet[2817]: E0114 01:09:25.930784 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:25.931053 kubelet[2817]: E0114 01:09:25.930875 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:25.932203 kubelet[2817]: E0114 01:09:25.932075 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:25.935576 kubelet[2817]: E0114 01:09:25.934610 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:28.736738 containerd[1618]: time="2026-01-14T01:09:28.736463382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:09:28.853624 containerd[1618]: time="2026-01-14T01:09:28.853410863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:28.857296 containerd[1618]: time="2026-01-14T01:09:28.857214375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:09:28.857766 kubelet[2817]: E0114 01:09:28.857680 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:28.857766 kubelet[2817]: E0114 01:09:28.857750 2817 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:28.861132 kubelet[2817]: E0114 01:09:28.858099 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8x65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:28.865567 kubelet[2817]: E0114 01:09:28.865368 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:09:28.887615 containerd[1618]: time="2026-01-14T01:09:28.857289209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:29.742552 containerd[1618]: time="2026-01-14T01:09:29.742470331Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:09:29.838362 containerd[1618]: time="2026-01-14T01:09:29.838290033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:29.844808 containerd[1618]: time="2026-01-14T01:09:29.843722486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:09:29.844808 containerd[1618]: time="2026-01-14T01:09:29.843855535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:29.845359 kubelet[2817]: E0114 01:09:29.844132 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:29.845359 kubelet[2817]: E0114 01:09:29.844185 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:29.845359 kubelet[2817]: E0114 01:09:29.844309 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:09:29.851127 containerd[1618]: time="2026-01-14T01:09:29.850592002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:09:29.946168 containerd[1618]: time="2026-01-14T01:09:29.945332653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:29.948224 containerd[1618]: time="2026-01-14T01:09:29.948185400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:09:29.948320 containerd[1618]: time="2026-01-14T01:09:29.948280879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:29.949260 kubelet[2817]: E0114 01:09:29.948785 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:29.949260 kubelet[2817]: E0114 01:09:29.948853 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:29.951172 kubelet[2817]: E0114 01:09:29.950859 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:29.953610 kubelet[2817]: E0114 01:09:29.953400 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:09:34.733580 containerd[1618]: time="2026-01-14T01:09:34.733529188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:09:34.795640 containerd[1618]: time="2026-01-14T01:09:34.795494865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:34.803524 containerd[1618]: time="2026-01-14T01:09:34.803133024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:09:34.803524 containerd[1618]: time="2026-01-14T01:09:34.803282443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:34.805641 kubelet[2817]: E0114 01:09:34.804106 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:34.805641 kubelet[2817]: E0114 01:09:34.804853 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:34.806520 kubelet[2817]: E0114 01:09:34.806372 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:34.807984 kubelet[2817]: E0114 01:09:34.807805 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:09:38.735802 containerd[1618]: time="2026-01-14T01:09:38.735737155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:09:38.806168 containerd[1618]: time="2026-01-14T01:09:38.804750478Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:09:38.806835 containerd[1618]: time="2026-01-14T01:09:38.806712293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:09:38.806977 containerd[1618]: time="2026-01-14T01:09:38.806839871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:38.807197 kubelet[2817]: E0114 01:09:38.807085 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:09:38.807652 kubelet[2817]: E0114 01:09:38.807204 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:09:38.807652 kubelet[2817]: E0114 01:09:38.807343 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:38.809432 kubelet[2817]: E0114 01:09:38.809247 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:39.732347 kubelet[2817]: E0114 01:09:39.732172 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:39.736684 containerd[1618]: 
time="2026-01-14T01:09:39.736649142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:09:39.827928 containerd[1618]: time="2026-01-14T01:09:39.825346676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:39.827928 containerd[1618]: time="2026-01-14T01:09:39.827288061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:09:39.827928 containerd[1618]: time="2026-01-14T01:09:39.827374734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:39.828158 kubelet[2817]: E0114 01:09:39.827835 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:09:39.828158 kubelet[2817]: E0114 01:09:39.827976 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:09:39.829285 kubelet[2817]: E0114 01:09:39.828639 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:39.830433 kubelet[2817]: E0114 01:09:39.830358 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:40.738299 kubelet[2817]: E0114 01:09:40.734851 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:40.739308 kubelet[2817]: E0114 01:09:40.739226 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:41.961204 kubelet[2817]: E0114 01:09:41.960602 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:43.744246 kubelet[2817]: E0114 01:09:43.738407 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:09:43.745133 kubelet[2817]: E0114 01:09:43.744789 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:09:44.735018 kubelet[2817]: E0114 01:09:44.734627 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:09:48.734650 kubelet[2817]: E0114 01:09:48.734478 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:09:49.738963 kubelet[2817]: E0114 01:09:49.738100 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" 
podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:09:51.739641 containerd[1618]: time="2026-01-14T01:09:51.739160948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:09:51.848378 containerd[1618]: time="2026-01-14T01:09:51.848318940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:51.852591 containerd[1618]: time="2026-01-14T01:09:51.852405671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:09:51.852591 containerd[1618]: time="2026-01-14T01:09:51.852515006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:51.853837 kubelet[2817]: E0114 01:09:51.853006 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:51.853837 kubelet[2817]: E0114 01:09:51.853069 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:09:51.853837 kubelet[2817]: E0114 01:09:51.853192 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c8b989ab0e84e66925d2b86c9e93775,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:51.857675 containerd[1618]: time="2026-01-14T01:09:51.857651599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:09:51.962809 containerd[1618]: 
time="2026-01-14T01:09:51.962627086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:51.965040 containerd[1618]: time="2026-01-14T01:09:51.964877343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:09:51.965205 containerd[1618]: time="2026-01-14T01:09:51.965049004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:51.966027 kubelet[2817]: E0114 01:09:51.965338 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:51.966027 kubelet[2817]: E0114 01:09:51.965397 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:09:51.966027 kubelet[2817]: E0114 01:09:51.965524 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:51.969384 kubelet[2817]: E0114 01:09:51.969119 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:09:52.734657 kubelet[2817]: E0114 01:09:52.734384 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:09:54.737739 containerd[1618]: time="2026-01-14T01:09:54.736624962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:09:54.821789 containerd[1618]: time="2026-01-14T01:09:54.821540346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:54.825633 containerd[1618]: time="2026-01-14T01:09:54.825506822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:09:54.825633 containerd[1618]: time="2026-01-14T01:09:54.825600196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:54.826864 kubelet[2817]: E0114 01:09:54.826818 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:54.827491 kubelet[2817]: E0114 01:09:54.826881 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:09:54.827491 kubelet[2817]: E0114 01:09:54.827156 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8x65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:54.829486 kubelet[2817]: E0114 01:09:54.829439 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:09:59.740157 containerd[1618]: time="2026-01-14T01:09:59.740108062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:09:59.855956 containerd[1618]: time="2026-01-14T01:09:59.854581750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:59.864101 containerd[1618]: time="2026-01-14T01:09:59.864053782Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:09:59.865127 containerd[1618]: time="2026-01-14T01:09:59.864326633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:59.868673 kubelet[2817]: E0114 01:09:59.866436 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:59.868673 kubelet[2817]: E0114 01:09:59.866502 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:09:59.868673 kubelet[2817]: E0114 01:09:59.866627 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:59.874627 containerd[1618]: time="2026-01-14T01:09:59.873150663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:09:59.961457 containerd[1618]: time="2026-01-14T01:09:59.960181675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:09:59.964431 containerd[1618]: time="2026-01-14T01:09:59.963540119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:09:59.964431 containerd[1618]: time="2026-01-14T01:09:59.963662077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:09:59.964712 kubelet[2817]: E0114 01:09:59.964603 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:59.964712 kubelet[2817]: E0114 01:09:59.964700 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:09:59.965243 kubelet[2817]: E0114 01:09:59.964842 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:09:59.967506 kubelet[2817]: E0114 01:09:59.966710 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:10:00.742239 containerd[1618]: time="2026-01-14T01:10:00.742094301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:10:00.850962 containerd[1618]: time="2026-01-14T01:10:00.850767645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:00.858525 containerd[1618]: time="2026-01-14T01:10:00.858482907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:00.858930 containerd[1618]: time="2026-01-14T01:10:00.858713929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:10:00.860838 kubelet[2817]: E0114 01:10:00.860376 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:00.860838 kubelet[2817]: E0114 01:10:00.860431 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:00.860838 kubelet[2817]: E0114 01:10:00.860560 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:00.861872 kubelet[2817]: E0114 01:10:00.861768 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:10:01.746130 containerd[1618]: time="2026-01-14T01:10:01.744041946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:10:01.872967 containerd[1618]: time="2026-01-14T01:10:01.872829078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:10:01.890138 containerd[1618]: time="2026-01-14T01:10:01.890025368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:10:01.890360 containerd[1618]: time="2026-01-14T01:10:01.890183954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:01.890474 kubelet[2817]: E0114 01:10:01.890401 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:01.890474 kubelet[2817]: E0114 01:10:01.890463 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:01.891681 kubelet[2817]: E0114 01:10:01.890619 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:01.898291 kubelet[2817]: E0114 01:10:01.892537 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:10:05.739229 kubelet[2817]: E0114 01:10:05.736065 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:05.753167 kubelet[2817]: E0114 01:10:05.752692 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:10:06.742093 kubelet[2817]: E0114 01:10:06.742031 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:10:06.745628 containerd[1618]: time="2026-01-14T01:10:06.745264722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:10:06.845577 containerd[1618]: time="2026-01-14T01:10:06.845455641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:06.851488 containerd[1618]: time="2026-01-14T01:10:06.849319428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:10:06.851488 containerd[1618]: time="2026-01-14T01:10:06.849460262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:06.853511 kubelet[2817]: E0114 01:10:06.853404 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:10:06.853658 kubelet[2817]: E0114 01:10:06.853635 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 
01:10:06.854880 kubelet[2817]: E0114 01:10:06.854723 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:06.858406 kubelet[2817]: E0114 01:10:06.857533 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:10:12.733494 kubelet[2817]: E0114 01:10:12.732511 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:10:12.735840 kubelet[2817]: E0114 01:10:12.735279 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:10:15.743233 kubelet[2817]: E0114 01:10:15.743095 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:10:17.745120 kubelet[2817]: E0114 01:10:17.744988 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:10:18.739028 kubelet[2817]: E0114 01:10:18.738539 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:10:19.736224 kubelet[2817]: E0114 01:10:19.736103 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:10:20.732234 kubelet[2817]: E0114 01:10:20.732094 2817 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:26.735998 kubelet[2817]: E0114 01:10:26.735773 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:10:26.736847 kubelet[2817]: E0114 01:10:26.736011 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:10:26.739025 kubelet[2817]: E0114 01:10:26.737195 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:10:27.738361 kubelet[2817]: E0114 01:10:27.738261 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:29.769321 kubelet[2817]: E0114 01:10:29.769190 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:10:30.135218 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:34906.service - OpenSSH per-connection server daemon (10.0.0.1:34906). Jan 14 01:10:30.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.105:22-10.0.0.1:34906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:30.150051 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 14 01:10:30.150154 kernel: audit: type=1130 audit(1768353030.135:733): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.105:22-10.0.0.1:34906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:30.519000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.520810 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 34906 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:30.525594 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:30.519000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.540595 systemd-logind[1593]: New session 9 of user core. Jan 14 01:10:30.553820 kernel: audit: type=1101 audit(1768353030.519:734): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.554800 kernel: audit: type=1103 audit(1768353030.519:735): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.555289 kernel: audit: type=1006 audit(1768353030.519:736): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:10:30.561651 kernel: audit: type=1300 audit(1768353030.519:736): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb7360180 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:30.519000 audit[5142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb7360180 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:30.584992 kernel: audit: type=1327 audit(1768353030.519:736): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:30.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:30.586779 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:10:30.604000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.612000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.661854 kernel: audit: type=1105 audit(1768353030.604:737): pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.662269 kernel: audit: type=1103 audit(1768353030.612:738): pid=5146 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.930000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.936212 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:34906.service: Deactivated successfully. Jan 14 01:10:30.937719 sshd[5146]: Connection closed by 10.0.0.1 port 34906 Jan 14 01:10:30.929784 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:30.938651 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:10:30.941692 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:10:30.946659 systemd-logind[1593]: Removed session 9. 
Jan 14 01:10:30.957047 kernel: audit: type=1106 audit(1768353030.930:739): pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.931000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.978085 kernel: audit: type=1104 audit(1768353030.931:740): pid=5142 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:30.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.105:22-10.0.0.1:34906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:32.737654 kubelet[2817]: E0114 01:10:32.736374 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:10:32.742966 containerd[1618]: time="2026-01-14T01:10:32.742820346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:10:32.835862 containerd[1618]: time="2026-01-14T01:10:32.835678343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:32.841743 containerd[1618]: time="2026-01-14T01:10:32.841311485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:10:32.841743 containerd[1618]: time="2026-01-14T01:10:32.841407585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:32.842116 kubelet[2817]: E0114 01:10:32.841783 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:32.842116 kubelet[2817]: E0114 01:10:32.841866 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:10:32.842390 kubelet[2817]: E0114 01:10:32.842263 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c8b989ab0e84e66925d2b86c9e93775,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:32.846342 containerd[1618]: time="2026-01-14T01:10:32.846161721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:10:32.933202 containerd[1618]: time="2026-01-14T01:10:32.932977653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:32.937398 containerd[1618]: time="2026-01-14T01:10:32.937332085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:10:32.937567 containerd[1618]: time="2026-01-14T01:10:32.937435008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:32.937792 kubelet[2817]: E0114 01:10:32.937682 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:32.937792 kubelet[2817]: E0114 01:10:32.937778 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:10:32.939543 kubelet[2817]: E0114 01:10:32.939283 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:32.941170 kubelet[2817]: E0114 01:10:32.940984 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:10:35.978486 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:35.978660 kernel: audit: type=1130 audit(1768353035.973:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:44288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:44288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:35.974681 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:44288.service - OpenSSH per-connection server daemon (10.0.0.1:44288). 
Jan 14 01:10:36.100000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.130563 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 44288 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:36.132999 kernel: audit: type=1101 audit(1768353036.100:743): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.136000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.142071 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:36.170024 kernel: audit: type=1103 audit(1768353036.136:744): pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.170149 kernel: audit: type=1006 audit(1768353036.137:745): pid=5170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:10:36.162018 systemd-logind[1593]: New session 10 of user core. 
Jan 14 01:10:36.137000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7ca1c600 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:36.183676 kernel: audit: type=1300 audit(1768353036.137:745): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7ca1c600 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:36.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:36.219967 kernel: audit: type=1327 audit(1768353036.137:745): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:36.228192 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 01:10:36.240000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.247000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.281331 kernel: audit: type=1105 audit(1768353036.240:746): pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.283123 kernel: audit: type=1103 audit(1768353036.247:747): pid=5174 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.439059 sshd[5174]: Connection closed by 10.0.0.1 port 44288 Jan 14 01:10:36.436780 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:36.439000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.450629 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. 
Jan 14 01:10:36.452469 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:44288.service: Deactivated successfully. Jan 14 01:10:36.457263 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:10:36.462273 systemd-logind[1593]: Removed session 10. Jan 14 01:10:36.439000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.490705 kernel: audit: type=1106 audit(1768353036.439:748): pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.490849 kernel: audit: type=1104 audit(1768353036.439:749): pid=5170 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:36.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.105:22-10.0.0.1:44288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:37.740234 kubelet[2817]: E0114 01:10:37.740062 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:10:37.747497 kubelet[2817]: E0114 01:10:37.746428 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:10:40.736189 kubelet[2817]: E0114 01:10:40.735726 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:10:41.466288 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:44292.service - OpenSSH per-connection server daemon (10.0.0.1:44292). Jan 14 01:10:41.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:44292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.474280 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:41.474373 kernel: audit: type=1130 audit(1768353041.465:751): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:44292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:41.635000 audit[5188]: USER_ACCT pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.646082 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:41.647977 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 44292 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:41.643000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.665762 systemd-logind[1593]: New session 11 of user core. 
Jan 14 01:10:41.674994 kernel: audit: type=1101 audit(1768353041.635:752): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.675092 kernel: audit: type=1103 audit(1768353041.643:753): pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.675136 kernel: audit: type=1006 audit(1768353041.643:754): pid=5188 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:10:41.643000 audit[5188]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd62477dc0 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.703812 kernel: audit: type=1300 audit(1768353041.643:754): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd62477dc0 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:41.704035 kernel: audit: type=1327 audit(1768353041.643:754): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:41.643000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:41.704344 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:10:41.714000 audit[5188]: USER_START pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.719000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.758337 kernel: audit: type=1105 audit(1768353041.714:755): pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.758427 kernel: audit: type=1103 audit(1768353041.719:756): pid=5192 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:41.997706 sshd[5192]: Connection closed by 10.0.0.1 port 44292 Jan 14 01:10:41.998652 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:42.008000 audit[5188]: USER_END pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.013000 audit[5188]: CRED_DISP pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.041801 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:44292.service: Deactivated successfully. Jan 14 01:10:42.045983 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:10:42.051117 kernel: audit: type=1106 audit(1768353042.008:757): pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.051380 kernel: audit: type=1104 audit(1768353042.013:758): pid=5188 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:42.051975 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:10:42.053413 systemd-logind[1593]: Removed session 11. Jan 14 01:10:42.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.105:22-10.0.0.1:44292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:43.734491 kubelet[2817]: E0114 01:10:43.734359 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:10:43.737779 kubelet[2817]: E0114 01:10:43.737628 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:10:45.744124 kubelet[2817]: E0114 01:10:45.741052 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:45.751358 containerd[1618]: time="2026-01-14T01:10:45.747730801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:10:45.886336 containerd[1618]: time="2026-01-14T01:10:45.886090952Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:45.893344 containerd[1618]: time="2026-01-14T01:10:45.893234090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:10:45.893516 containerd[1618]: time="2026-01-14T01:10:45.893332186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:45.894184 kubelet[2817]: E0114 01:10:45.894068 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:45.894184 kubelet[2817]: E0114 01:10:45.894171 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:45.894519 kubelet[2817]: E0114 01:10:45.894306 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8x65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:45.897003 kubelet[2817]: E0114 01:10:45.896245 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:10:47.032509 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:50986.service - OpenSSH per-connection server daemon (10.0.0.1:50986). Jan 14 01:10:47.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:47.038304 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:47.038408 kernel: audit: type=1130 audit(1768353047.032:760): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:47.217042 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 50986 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:47.215000 audit[5243]: USER_ACCT pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.227447 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:47.245971 kernel: audit: type=1101 audit(1768353047.215:761): pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.217000 audit[5243]: CRED_ACQ pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.268280 systemd-logind[1593]: New session 12 of user core. 
Jan 14 01:10:47.282346 kernel: audit: type=1103 audit(1768353047.217:762): pid=5243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.282481 kernel: audit: type=1006 audit(1768353047.224:763): pid=5243 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:10:47.282519 kernel: audit: type=1300 audit(1768353047.224:763): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd26589960 a2=3 a3=0 items=0 ppid=1 pid=5243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:47.224000 audit[5243]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd26589960 a2=3 a3=0 items=0 ppid=1 pid=5243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:47.316135 kernel: audit: type=1327 audit(1768353047.224:763): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:47.224000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:47.334738 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:10:47.345000 audit[5243]: USER_START pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.388809 kernel: audit: type=1105 audit(1768353047.345:764): pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.349000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.423043 kernel: audit: type=1103 audit(1768353047.349:765): pid=5247 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.640879 sshd[5247]: Connection closed by 10.0.0.1 port 50986 Jan 14 01:10:47.642217 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:47.650000 audit[5243]: USER_END pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.660478 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:50986.service: Deactivated successfully. 
Jan 14 01:10:47.650000 audit[5243]: CRED_DISP pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.670584 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:10:47.676545 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:10:47.679739 systemd-logind[1593]: Removed session 12. Jan 14 01:10:47.689236 kernel: audit: type=1106 audit(1768353047.650:766): pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.689345 kernel: audit: type=1104 audit(1768353047.650:767): pid=5243 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:47.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.105:22-10.0.0.1:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:50.732984 kubelet[2817]: E0114 01:10:50.732827 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:10:50.737435 containerd[1618]: time="2026-01-14T01:10:50.737331769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:10:50.824838 containerd[1618]: time="2026-01-14T01:10:50.824778493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:50.831260 containerd[1618]: time="2026-01-14T01:10:50.831145544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:10:50.831415 containerd[1618]: time="2026-01-14T01:10:50.831272010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:50.832527 kubelet[2817]: E0114 01:10:50.831579 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:10:50.832527 kubelet[2817]: E0114 01:10:50.831680 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:10:50.832527 kubelet[2817]: E0114 01:10:50.831976 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:10:50.833607 containerd[1618]: time="2026-01-14T01:10:50.833535385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:10:50.914263 containerd[1618]: time="2026-01-14T01:10:50.914184658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:50.920462 containerd[1618]: time="2026-01-14T01:10:50.920037883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:10:50.920462 containerd[1618]: time="2026-01-14T01:10:50.920139784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:50.920758 kubelet[2817]: E0114 01:10:50.920713 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:50.921337 kubelet[2817]: E0114 01:10:50.921302 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:10:50.921800 kubelet[2817]: E0114 01:10:50.921738 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:50.923324 containerd[1618]: time="2026-01-14T01:10:50.922335994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:10:50.925972 kubelet[2817]: E0114 01:10:50.925139 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:10:51.016523 containerd[1618]: time="2026-01-14T01:10:51.009958057Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 14 01:10:51.022808 containerd[1618]: time="2026-01-14T01:10:51.022131454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:10:51.022808 containerd[1618]: time="2026-01-14T01:10:51.022263060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:51.023122 kubelet[2817]: E0114 01:10:51.022698 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:10:51.023122 kubelet[2817]: E0114 01:10:51.022826 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:10:51.027195 kubelet[2817]: E0114 01:10:51.023608 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:51.029661 kubelet[2817]: E0114 01:10:51.029165 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:10:52.685609 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:47436.service - OpenSSH per-connection server daemon (10.0.0.1:47436). Jan 14 01:10:52.696039 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:52.696145 kernel: audit: type=1130 audit(1768353052.683:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:47436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:52.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:47436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:52.953000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.960506 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 47436 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:52.971164 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:52.965000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:52.988041 systemd-logind[1593]: New session 13 of user core. Jan 14 01:10:53.016995 kernel: audit: type=1101 audit(1768353052.953:770): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.017133 kernel: audit: type=1103 audit(1768353052.965:771): pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.017170 kernel: audit: type=1006 audit(1768353052.965:772): pid=5275 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:10:53.030747 kernel: audit: type=1300 audit(1768353052.965:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc8448f80 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:52.965000 audit[5275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc8448f80 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:52.965000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:53.071762 kernel: audit: type=1327 audit(1768353052.965:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:53.075373 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:10:53.089000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.100000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.150131 kernel: audit: type=1105 audit(1768353053.089:773): pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.150419 kernel: audit: type=1103 audit(1768353053.100:774): pid=5279 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.697334 sshd[5279]: Connection closed by 10.0.0.1 port 47436 Jan 14 01:10:53.699310 sshd-session[5275]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:53.714000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.763789 kernel: audit: type=1106 audit(1768353053.714:775): pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.767804 kernel: audit: type=1104 audit(1768353053.744:776): pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.744000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:53.776417 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:10:53.781767 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:47436.service: Deactivated successfully. 
Jan 14 01:10:53.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.105:22-10.0.0.1:47436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:53.799151 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:10:53.804970 systemd-logind[1593]: Removed session 13. Jan 14 01:10:54.745080 containerd[1618]: time="2026-01-14T01:10:54.741621778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:10:54.875593 containerd[1618]: time="2026-01-14T01:10:54.875349702Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:54.879621 containerd[1618]: time="2026-01-14T01:10:54.879455921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:10:54.879965 containerd[1618]: time="2026-01-14T01:10:54.879938222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:54.880592 kubelet[2817]: E0114 01:10:54.880140 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:54.880592 kubelet[2817]: E0114 01:10:54.880191 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:10:54.880592 
kubelet[2817]: E0114 01:10:54.880320 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:54.883302 kubelet[2817]: E0114 01:10:54.883243 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:10:56.738768 kubelet[2817]: E0114 01:10:56.738463 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:10:58.748015 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:47444.service - OpenSSH per-connection server daemon (10.0.0.1:47444). Jan 14 01:10:58.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:47444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:58.755207 containerd[1618]: time="2026-01-14T01:10:58.753788799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:10:58.760457 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:10:58.760532 kernel: audit: type=1130 audit(1768353058.747:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:47444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:58.766212 kubelet[2817]: E0114 01:10:58.766130 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:10:58.857975 containerd[1618]: time="2026-01-14T01:10:58.857462438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:10:58.871110 containerd[1618]: time="2026-01-14T01:10:58.871014059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:10:58.871259 containerd[1618]: time="2026-01-14T01:10:58.871161544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:10:58.878855 kubelet[2817]: E0114 01:10:58.876027 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Jan 14 01:10:58.878855 kubelet[2817]: E0114 01:10:58.876095 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:10:58.878855 kubelet[2817]: E0114 01:10:58.876259 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:10:58.880426 kubelet[2817]: E0114 01:10:58.880282 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:10:59.166136 sshd[5293]: Accepted publickey for core from 10.0.0.1 port 47444 ssh2: RSA 
SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:10:59.164000 audit[5293]: USER_ACCT pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.174395 sshd-session[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:59.186596 kernel: audit: type=1101 audit(1768353059.164:779): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.210976 systemd-logind[1593]: New session 14 of user core. Jan 14 01:10:59.165000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.252957 kernel: audit: type=1103 audit(1768353059.165:780): pid=5293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.258572 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:10:59.285842 kernel: audit: type=1006 audit(1768353059.165:781): pid=5293 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:10:59.165000 audit[5293]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcf477f80 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:59.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:59.348332 kernel: audit: type=1300 audit(1768353059.165:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcf477f80 a2=3 a3=0 items=0 ppid=1 pid=5293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:59.348469 kernel: audit: type=1327 audit(1768353059.165:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:59.348520 kernel: audit: type=1105 audit(1768353059.279:782): pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.279000 audit[5293]: USER_START pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.288000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.415732 kernel: audit: type=1103 audit(1768353059.288:783): pid=5297 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.670800 sshd[5297]: Connection closed by 10.0.0.1 port 47444 Jan 14 01:10:59.668408 sshd-session[5293]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:59.672000 audit[5293]: USER_END pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.687855 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:10:59.689170 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:47444.service: Deactivated successfully. Jan 14 01:10:59.705417 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:10:59.672000 audit[5293]: CRED_DISP pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.712139 systemd-logind[1593]: Removed session 14. 
Jan 14 01:10:59.731560 kernel: audit: type=1106 audit(1768353059.672:784): pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.731640 kernel: audit: type=1104 audit(1768353059.672:785): pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:10:59.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.105:22-10.0.0.1:47444 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:01.744963 kubelet[2817]: E0114 01:11:01.738579 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:11:02.741878 kubelet[2817]: E0114 01:11:02.741822 2817 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:11:04.706092 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:38846.service - OpenSSH per-connection server daemon (10.0.0.1:38846). Jan 14 01:11:04.740026 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:04.740143 kernel: audit: type=1130 audit(1768353064.706:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:38846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:04.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:38846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:04.741041 kubelet[2817]: E0114 01:11:04.741009 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:04.912000 audit[5312]: USER_ACCT pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:04.915608 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:04.919634 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:04.912000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:04.954226 systemd-logind[1593]: New session 15 of user core. Jan 14 01:11:04.980857 kernel: audit: type=1101 audit(1768353064.912:788): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:04.981173 kernel: audit: type=1103 audit(1768353064.912:789): pid=5312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:04.995337 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:11:04.912000 audit[5312]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0a7ffa10 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:05.039801 kernel: audit: type=1006 audit(1768353064.912:790): pid=5312 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:11:05.039985 kernel: audit: type=1300 audit(1768353064.912:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0a7ffa10 a2=3 a3=0 items=0 ppid=1 pid=5312 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:05.040035 kernel: audit: type=1327 audit(1768353064.912:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:04.912000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:05.030000 audit[5312]: USER_START pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.080847 kernel: audit: type=1105 audit(1768353065.030:791): pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.081046 kernel: audit: type=1103 audit(1768353065.052:792): pid=5316 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.052000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.424473 sshd[5316]: Connection closed by 10.0.0.1 port 38846 Jan 14 01:11:05.424819 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:05.431000 audit[5312]: USER_END pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.449671 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:38846.service: Deactivated successfully. Jan 14 01:11:05.431000 audit[5312]: CRED_DISP pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.465001 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:11:05.468122 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. 
Jan 14 01:11:05.478377 kernel: audit: type=1106 audit(1768353065.431:793): pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.478481 kernel: audit: type=1104 audit(1768353065.431:794): pid=5312 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:05.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.105:22-10.0.0.1:38846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:05.480442 systemd-logind[1593]: Removed session 15. 
Jan 14 01:11:08.739063 kubelet[2817]: E0114 01:11:08.736529 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:11:08.739063 kubelet[2817]: E0114 01:11:08.737090 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:08.739063 kubelet[2817]: E0114 01:11:08.737185 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:11:10.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:38852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:10.464719 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:38852.service - OpenSSH per-connection server daemon (10.0.0.1:38852). 
Jan 14 01:11:10.471960 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:10.472038 kernel: audit: type=1130 audit(1768353070.467:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:38852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:10.754000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.755964 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 38852 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:10.768350 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:10.780451 kernel: audit: type=1101 audit(1768353070.754:797): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.757000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.796047 kernel: audit: type=1103 audit(1768353070.757:798): pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.796207 kernel: audit: type=1006 audit(1768353070.757:799): pid=5333 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:11:10.803453 systemd-logind[1593]: New session 16 of user core. Jan 14 01:11:10.757000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe53c3bd60 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:10.757000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:10.859782 kernel: audit: type=1300 audit(1768353070.757:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe53c3bd60 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:10.859878 kernel: audit: type=1327 audit(1768353070.757:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:10.861662 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 01:11:10.882000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.942582 kernel: audit: type=1105 audit(1768353070.882:800): pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.885000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:10.975136 kernel: audit: type=1103 audit(1768353070.885:801): pid=5337 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.326142 sshd[5337]: Connection closed by 10.0.0.1 port 38852 Jan 14 01:11:11.325825 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:11.326000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.347518 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:38852.service: Deactivated successfully. 
Jan 14 01:11:11.355864 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:11:11.326000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.363523 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:11:11.381658 systemd-logind[1593]: Removed session 16. Jan 14 01:11:11.398580 kernel: audit: type=1106 audit(1768353071.326:802): pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.398663 kernel: audit: type=1104 audit(1768353071.326:803): pid=5333 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:11.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.105:22-10.0.0.1:38852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:11.747019 kubelet[2817]: E0114 01:11:11.745334 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:11.752109 kubelet[2817]: E0114 01:11:11.752072 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:11:11.756112 kubelet[2817]: E0114 01:11:11.756072 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:11:16.357068 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). 
Jan 14 01:11:16.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:16.365296 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:16.365750 kernel: audit: type=1130 audit(1768353076.355:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:16.547000 audit[5381]: USER_ACCT pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.549097 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:16.556573 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:16.553000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.569066 systemd-logind[1593]: New session 17 of user core. 
Jan 14 01:11:16.589632 kernel: audit: type=1101 audit(1768353076.547:806): pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.589726 kernel: audit: type=1103 audit(1768353076.553:807): pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.589762 kernel: audit: type=1006 audit(1768353076.553:808): pid=5381 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:11:16.553000 audit[5381]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd95625280 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:16.648934 kernel: audit: type=1300 audit(1768353076.553:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd95625280 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:16.553000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:16.659240 kernel: audit: type=1327 audit(1768353076.553:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:16.659617 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:11:16.674000 audit[5381]: USER_START pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.713962 kernel: audit: type=1105 audit(1768353076.674:809): pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.714097 kernel: audit: type=1103 audit(1768353076.679:810): pid=5385 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.679000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:16.755222 kubelet[2817]: E0114 01:11:16.755126 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:11:16.755726 kubelet[2817]: E0114 01:11:16.755284 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:11:17.004221 sshd[5385]: Connection closed by 10.0.0.1 port 34538 Jan 14 01:11:17.006153 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:17.013000 audit[5381]: USER_END pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:17.026375 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:34538.service: Deactivated successfully. 
Jan 14 01:11:17.046074 kernel: audit: type=1106 audit(1768353077.013:811): pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:17.041642 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:11:17.013000 audit[5381]: CRED_DISP pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:17.049879 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:11:17.055207 systemd-logind[1593]: Removed session 17. Jan 14 01:11:17.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.105:22-10.0.0.1:34538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:17.064045 kernel: audit: type=1104 audit(1768353077.013:812): pid=5381 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:19.749416 kubelet[2817]: E0114 01:11:19.749313 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:11:21.745159 kubelet[2817]: E0114 01:11:21.743345 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:11:22.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:34554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:22.053304 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:34554.service - OpenSSH per-connection server daemon (10.0.0.1:34554). 
Jan 14 01:11:22.063027 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:22.063364 kernel: audit: type=1130 audit(1768353082.051:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:34554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:22.207000 audit[5401]: USER_ACCT pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.216000 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 34554 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:22.216745 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:22.243404 systemd-logind[1593]: New session 18 of user core. 
Jan 14 01:11:22.211000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.263095 kernel: audit: type=1101 audit(1768353082.207:815): pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.263278 kernel: audit: type=1103 audit(1768353082.211:816): pid=5401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.263337 kernel: audit: type=1006 audit(1768353082.211:817): pid=5401 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:11:22.276996 kernel: audit: type=1300 audit(1768353082.211:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbc3a6b60 a2=3 a3=0 items=0 ppid=1 pid=5401 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:22.211000 audit[5401]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbc3a6b60 a2=3 a3=0 items=0 ppid=1 pid=5401 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:22.278116 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:11:22.307009 kernel: audit: type=1327 audit(1768353082.211:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:22.211000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:22.325019 kernel: audit: type=1105 audit(1768353082.288:818): pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.288000 audit[5401]: USER_START pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.296000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.397024 kernel: audit: type=1103 audit(1768353082.296:819): pid=5405 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.657278 sshd[5405]: Connection closed by 10.0.0.1 port 34554 Jan 14 01:11:22.655077 sshd-session[5401]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:22.659000 audit[5401]: USER_END pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.674555 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:34554.service: Deactivated successfully. Jan 14 01:11:22.679041 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:11:22.683657 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:11:22.690516 systemd-logind[1593]: Removed session 18. Jan 14 01:11:22.660000 audit[5401]: CRED_DISP pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.712574 kernel: audit: type=1106 audit(1768353082.659:820): pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.712665 kernel: audit: type=1104 audit(1768353082.660:821): pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:22.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.105:22-10.0.0.1:34554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:23.744390 kubelet[2817]: E0114 01:11:23.744150 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:11:25.754615 kubelet[2817]: E0114 01:11:25.754441 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:11:27.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:51412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:27.700343 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:51412.service - OpenSSH per-connection server daemon (10.0.0.1:51412). 
Jan 14 01:11:27.707011 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:27.707087 kernel: audit: type=1130 audit(1768353087.698:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:51412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:27.733270 kubelet[2817]: E0114 01:11:27.733074 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:11:27.870427 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 51412 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:27.869000 audit[5419]: USER_ACCT pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.878434 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:27.897751 kernel: audit: type=1101 audit(1768353087.869:824): pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.900061 kernel: audit: type=1103 audit(1768353087.874:825): pid=5419 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.874000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.909032 systemd-logind[1593]: New session 19 of user core. Jan 14 01:11:27.874000 audit[5419]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe811b29e0 a2=3 a3=0 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:27.947427 kernel: audit: type=1006 audit(1768353087.874:826): pid=5419 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:11:27.947599 kernel: audit: type=1300 audit(1768353087.874:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe811b29e0 a2=3 a3=0 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:27.948333 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:11:27.952152 kernel: audit: type=1327 audit(1768353087.874:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:27.874000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:27.966000 audit[5419]: USER_START pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.994969 kernel: audit: type=1105 audit(1768353087.966:827): pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.995081 kernel: audit: type=1103 audit(1768353087.978:828): pid=5423 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:27.978000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:28.213208 sshd[5423]: Connection closed by 10.0.0.1 port 51412 Jan 14 01:11:28.214106 sshd-session[5419]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:28.229000 audit[5419]: USER_END pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:28.253509 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:51412.service: Deactivated successfully. Jan 14 01:11:28.267124 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:11:28.270039 kernel: audit: type=1106 audit(1768353088.229:829): pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:28.229000 audit[5419]: CRED_DISP pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:28.283425 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:11:28.288069 systemd-logind[1593]: Removed session 19. Jan 14 01:11:28.295802 kernel: audit: type=1104 audit(1768353088.229:830): pid=5419 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:28.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.105:22-10.0.0.1:51412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:31.737100 kubelet[2817]: E0114 01:11:31.736610 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:11:32.736813 kubelet[2817]: E0114 01:11:32.735132 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:11:33.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:53728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:33.246071 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:53728.service - OpenSSH per-connection server daemon (10.0.0.1:53728). 
Jan 14 01:11:33.252029 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:33.252261 kernel: audit: type=1130 audit(1768353093.245:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:53728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:33.487000 audit[5438]: USER_ACCT pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.520288 kernel: audit: type=1101 audit(1768353093.487:833): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.526325 sshd[5438]: Accepted publickey for core from 10.0.0.1 port 53728 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:33.540459 sshd-session[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:33.537000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.562850 kernel: audit: type=1103 audit(1768353093.537:834): pid=5438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.570355 systemd-logind[1593]: New session 20 of user core. 
Jan 14 01:11:33.571227 kernel: audit: type=1006 audit(1768353093.537:835): pid=5438 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 01:11:33.571263 kernel: audit: type=1300 audit(1768353093.537:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ba289c0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:33.537000 audit[5438]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8ba289c0 a2=3 a3=0 items=0 ppid=1 pid=5438 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:33.589830 kernel: audit: type=1327 audit(1768353093.537:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:33.537000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:33.590226 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:11:33.597000 audit[5438]: USER_START pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.601000 audit[5442]: CRED_ACQ pid=5442 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.658167 kernel: audit: type=1105 audit(1768353093.597:836): pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.658283 kernel: audit: type=1103 audit(1768353093.601:837): pid=5442 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.812518 sshd[5442]: Connection closed by 10.0.0.1 port 53728 Jan 14 01:11:33.815070 sshd-session[5438]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:33.817000 audit[5438]: USER_END pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.822627 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:53728.service: Deactivated successfully. 
Jan 14 01:11:33.823793 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:11:33.837193 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:11:33.841496 systemd-logind[1593]: Removed session 20. Jan 14 01:11:33.860371 kernel: audit: type=1106 audit(1768353093.817:838): pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.860534 kernel: audit: type=1104 audit(1768353093.817:839): pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.817000 audit[5438]: CRED_DISP pid=5438 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:33.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.105:22-10.0.0.1:53728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:34.733224 kubelet[2817]: E0114 01:11:34.732685 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:11:35.741177 kubelet[2817]: E0114 01:11:35.741051 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:37.741991 kubelet[2817]: E0114 01:11:37.740856 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:11:38.737486 kubelet[2817]: E0114 01:11:38.737321 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:11:38.846294 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:53742.service - OpenSSH per-connection server daemon (10.0.0.1:53742). Jan 14 01:11:38.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:38.851002 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:38.851076 kernel: audit: type=1130 audit(1768353098.845:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:39.004000 audit[5457]: USER_ACCT pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.013274 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 53742 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:39.019647 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:39.016000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.042854 kernel: audit: type=1101 audit(1768353099.004:842): pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.043070 kernel: audit: type=1103 audit(1768353099.016:843): pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.043102 kernel: audit: type=1006 audit(1768353099.017:844): pid=5457 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:11:39.052806 kernel: audit: type=1300 audit(1768353099.017:844): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc91c30fa0 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.017000 audit[5457]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc91c30fa0 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:39.048357 systemd-logind[1593]: New session 21 of user core. Jan 14 01:11:39.017000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:39.069398 kernel: audit: type=1327 audit(1768353099.017:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:39.070272 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:11:39.077000 audit[5457]: USER_START pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.097200 kernel: audit: type=1105 audit(1768353099.077:845): pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.082000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.117023 kernel: audit: type=1103 audit(1768353099.082:846): pid=5461 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.261793 sshd[5461]: Connection closed by 10.0.0.1 port 53742 Jan 14 01:11:39.263194 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:39.265000 audit[5457]: USER_END pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.275261 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:53742.service: Deactivated successfully. Jan 14 01:11:39.282296 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:11:39.285064 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:11:39.287267 systemd-logind[1593]: Removed session 21. 
Jan 14 01:11:39.297044 kernel: audit: type=1106 audit(1768353099.265:847): pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.297142 kernel: audit: type=1104 audit(1768353099.265:848): pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.265000 audit[5457]: CRED_DISP pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:39.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.105:22-10.0.0.1:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:39.740545 kubelet[2817]: E0114 01:11:39.740129 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:11:44.296030 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:44.296240 kernel: audit: type=1130 audit(1768353104.289:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:60486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:44.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:60486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:44.290679 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:60486.service - OpenSSH per-connection server daemon (10.0.0.1:60486). 
Jan 14 01:11:44.435400 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 60486 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:44.434000 audit[5505]: USER_ACCT pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.448349 kernel: audit: type=1101 audit(1768353104.434:851): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.451000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.456585 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:44.474402 systemd-logind[1593]: New session 22 of user core. 
Jan 14 01:11:44.479421 kernel: audit: type=1103 audit(1768353104.451:852): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.479518 kernel: audit: type=1006 audit(1768353104.451:853): pid=5505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:11:44.451000 audit[5505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd360152a0 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:44.451000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:44.497053 kernel: audit: type=1300 audit(1768353104.451:853): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd360152a0 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:44.497213 kernel: audit: type=1327 audit(1768353104.451:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:44.508267 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:11:44.532000 audit[5505]: USER_START pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.567410 kernel: audit: type=1105 audit(1768353104.532:854): pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.573000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.593976 kernel: audit: type=1103 audit(1768353104.573:855): pid=5509 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.738515 kubelet[2817]: E0114 01:11:44.736463 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:44.740619 kubelet[2817]: E0114 01:11:44.740413 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:44.808348 sshd[5509]: Connection closed by 10.0.0.1 port 60486 Jan 14 01:11:44.809176 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:44.813000 
audit[5505]: USER_END pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.822726 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:60486.service: Deactivated successfully. Jan 14 01:11:44.838846 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:11:44.814000 audit[5505]: CRED_DISP pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.849186 systemd-logind[1593]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:11:44.851693 systemd-logind[1593]: Removed session 22. Jan 14 01:11:44.858071 kernel: audit: type=1106 audit(1768353104.813:856): pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.858171 kernel: audit: type=1104 audit(1768353104.814:857): pid=5505 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:44.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.105:22-10.0.0.1:60486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:45.736665 kubelet[2817]: E0114 01:11:45.736514 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:11:46.734046 kubelet[2817]: E0114 01:11:46.733948 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:11:48.733275 kubelet[2817]: E0114 01:11:48.733163 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:11:49.738266 kubelet[2817]: E0114 01:11:49.737361 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:11:49.826758 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:60498.service - OpenSSH per-connection server daemon (10.0.0.1:60498). Jan 14 01:11:49.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:49.829211 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:11:49.829277 kernel: audit: type=1130 audit(1768353109.826:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:49.919000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.934699 systemd-logind[1593]: New session 23 of user core. 
Jan 14 01:11:49.938102 kernel: audit: type=1101 audit(1768353109.919:860): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.926539 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:49.938547 sshd[5523]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:49.924000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.951323 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:11:49.957398 kernel: audit: type=1103 audit(1768353109.924:861): pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.957579 kernel: audit: type=1006 audit(1768353109.924:862): pid=5523 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:11:49.957617 kernel: audit: type=1300 audit(1768353109.924:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69a8d110 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.924000 audit[5523]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc69a8d110 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:49.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:49.974129 kernel: audit: type=1327 audit(1768353109.924:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:49.974221 kernel: audit: type=1105 audit(1768353109.958:863): pid=5523 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.958000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.961000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:49.995668 kernel: audit: type=1103 audit(1768353109.961:864): pid=5527 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.075335 sshd[5527]: Connection closed by 10.0.0.1 port 60498 Jan 14 01:11:50.075751 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:50.076000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.076000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.100579 kernel: audit: type=1106 audit(1768353110.076:865): pid=5523 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.100682 kernel: audit: type=1104 audit(1768353110.076:866): pid=5523 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.107358 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:60498.service: Deactivated successfully. Jan 14 01:11:50.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.105:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:50.111154 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:11:50.112508 systemd-logind[1593]: Session 23 logged out. Waiting for processes to exit. 
Jan 14 01:11:50.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.105:22-10.0.0.1:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:50.116716 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504). Jan 14 01:11:50.119529 systemd-logind[1593]: Removed session 23. Jan 14 01:11:50.188000 audit[5541]: USER_ACCT pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.190171 sshd[5541]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:50.190000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.190000 audit[5541]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9be15000 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:50.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:50.193220 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:50.201476 systemd-logind[1593]: New session 24 of user core. Jan 14 01:11:50.209262 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:11:50.213000 audit[5541]: USER_START pid=5541 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.216000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.367201 sshd[5545]: Connection closed by 10.0.0.1 port 60504 Jan 14 01:11:50.367580 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:50.372000 audit[5541]: USER_END pid=5541 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.372000 audit[5541]: CRED_DISP pid=5541 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.379504 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:60504.service: Deactivated successfully. Jan 14 01:11:50.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.105:22-10.0.0.1:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:50.382176 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:11:50.386124 systemd-logind[1593]: Session 24 logged out. Waiting for processes to exit. 
Jan 14 01:11:50.393341 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:60520.service - OpenSSH per-connection server daemon (10.0.0.1:60520). Jan 14 01:11:50.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.105:22-10.0.0.1:60520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:50.397183 systemd-logind[1593]: Removed session 24. Jan 14 01:11:50.491000 audit[5556]: USER_ACCT pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.495821 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 60520 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:50.495000 audit[5556]: CRED_ACQ pid=5556 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.495000 audit[5556]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4c958ce0 a2=3 a3=0 items=0 ppid=1 pid=5556 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:50.495000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:50.499225 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:50.509497 systemd-logind[1593]: New session 25 of user core. Jan 14 01:11:50.517227 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:11:50.524000 audit[5556]: USER_START pid=5556 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.528000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.646357 sshd[5560]: Connection closed by 10.0.0.1 port 60520 Jan 14 01:11:50.647333 sshd-session[5556]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:50.648000 audit[5556]: USER_END pid=5556 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.648000 audit[5556]: CRED_DISP pid=5556 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:50.653373 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:60520.service: Deactivated successfully. Jan 14 01:11:50.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.105:22-10.0.0.1:60520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:50.656203 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:11:50.658370 systemd-logind[1593]: Session 25 logged out. Waiting for processes to exit. 
Jan 14 01:11:50.661405 systemd-logind[1593]: Removed session 25. Jan 14 01:11:50.732216 kubelet[2817]: E0114 01:11:50.731701 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:11:50.734850 kubelet[2817]: E0114 01:11:50.734739 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:11:53.733288 kubelet[2817]: E0114 01:11:53.733153 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:11:55.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:55.667237 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544). Jan 14 01:11:55.669879 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:11:55.669997 kernel: audit: type=1130 audit(1768353115.666:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:55.759000 audit[5579]: USER_ACCT pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.763234 sshd[5579]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:11:55.763832 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:11:55.761000 audit[5579]: CRED_ACQ pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.780571 systemd-logind[1593]: New session 26 of user core. 
Jan 14 01:11:55.784074 kernel: audit: type=1101 audit(1768353115.759:887): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.784152 kernel: audit: type=1103 audit(1768353115.761:888): pid=5579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.784186 kernel: audit: type=1006 audit(1768353115.761:889): pid=5579 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:11:55.789988 kernel: audit: type=1300 audit(1768353115.761:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe68bbe890 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:55.761000 audit[5579]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe68bbe890 a2=3 a3=0 items=0 ppid=1 pid=5579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:55.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:55.805674 kernel: audit: type=1327 audit(1768353115.761:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:11:55.806745 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:11:55.810000 audit[5579]: USER_START pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.813000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.838120 kernel: audit: type=1105 audit(1768353115.810:890): pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:55.838226 kernel: audit: type=1103 audit(1768353115.813:891): pid=5583 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:56.001208 sshd[5583]: Connection closed by 10.0.0.1 port 55544 Jan 14 01:11:56.001137 sshd-session[5579]: pam_unix(sshd:session): session closed for user core Jan 14 01:11:56.004000 audit[5579]: USER_END pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:56.012241 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:55544.service: Deactivated successfully. 
Jan 14 01:11:56.016100 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:11:56.017976 systemd-logind[1593]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:11:56.004000 audit[5579]: CRED_DISP pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:56.021597 systemd-logind[1593]: Removed session 26. Jan 14 01:11:56.030418 kernel: audit: type=1106 audit(1768353116.004:892): pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:56.030488 kernel: audit: type=1104 audit(1768353116.004:893): pid=5579 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:11:56.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.105:22-10.0.0.1:55544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:58.737635 kubelet[2817]: E0114 01:11:58.737436 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:12:00.741965 kubelet[2817]: E0114 01:12:00.740718 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:12:01.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:01.026562 systemd[1]: Started sshd@25-10.0.0.105:22-10.0.0.1:55552.service - OpenSSH per-connection server daemon (10.0.0.1:55552). 
Jan 14 01:12:01.049280 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:01.049356 kernel: audit: type=1130 audit(1768353121.024:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:01.182000 audit[5597]: USER_ACCT pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.183757 sshd[5597]: Accepted publickey for core from 10.0.0.1 port 55552 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:01.191498 sshd-session[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:01.188000 audit[5597]: CRED_ACQ pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.203392 systemd-logind[1593]: New session 27 of user core. 
Jan 14 01:12:01.208673 kernel: audit: type=1101 audit(1768353121.182:896): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.208854 kernel: audit: type=1103 audit(1768353121.188:897): pid=5597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.209326 kernel: audit: type=1006 audit(1768353121.188:898): pid=5597 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:12:01.188000 audit[5597]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4a324c50 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.251269 kernel: audit: type=1300 audit(1768353121.188:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4a324c50 a2=3 a3=0 items=0 ppid=1 pid=5597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.188000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:01.252676 kernel: audit: type=1327 audit(1768353121.188:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:01.261846 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:12:01.271000 audit[5597]: USER_START pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.274000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.306027 kernel: audit: type=1105 audit(1768353121.271:899): pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.306164 kernel: audit: type=1103 audit(1768353121.274:900): pid=5601 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.517573 sshd[5601]: Connection closed by 10.0.0.1 port 55552 Jan 14 01:12:01.518207 sshd-session[5597]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:01.525000 audit[5597]: USER_END pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.546856 systemd[1]: sshd@25-10.0.0.105:22-10.0.0.1:55552.service: Deactivated successfully. 
Jan 14 01:12:01.559158 kernel: audit: type=1106 audit(1768353121.525:901): pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.552546 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:12:01.556830 systemd-logind[1593]: Session 27 logged out. Waiting for processes to exit. Jan 14 01:12:01.560710 systemd-logind[1593]: Removed session 27. Jan 14 01:12:01.525000 audit[5597]: CRED_DISP pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.105:22-10.0.0.1:55552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:01.574028 kernel: audit: type=1104 audit(1768353121.525:902): pid=5597 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:01.752501 kubelet[2817]: E0114 01:12:01.750854 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:12:01.753287 kubelet[2817]: E0114 01:12:01.752961 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:04.733715 kubelet[2817]: E0114 01:12:04.733544 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:12:04.734463 containerd[1618]: time="2026-01-14T01:12:04.734159356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:04.818638 containerd[1618]: time="2026-01-14T01:12:04.818401433Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Jan 14 01:12:04.825681 containerd[1618]: time="2026-01-14T01:12:04.825588418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:04.825881 containerd[1618]: time="2026-01-14T01:12:04.825624415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:04.826211 kubelet[2817]: E0114 01:12:04.826154 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:04.826276 kubelet[2817]: E0114 01:12:04.826219 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:04.826569 kubelet[2817]: E0114 01:12:04.826355 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c8b989ab0e84e66925d2b86c9e93775,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:04.828935 containerd[1618]: time="2026-01-14T01:12:04.828812538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:04.929952 containerd[1618]: 
time="2026-01-14T01:12:04.929430669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:04.931970 containerd[1618]: time="2026-01-14T01:12:04.931693318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:04.935949 containerd[1618]: time="2026-01-14T01:12:04.932434994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:04.937051 kubelet[2817]: E0114 01:12:04.936972 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:04.937203 kubelet[2817]: E0114 01:12:04.937104 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:04.937334 kubelet[2817]: E0114 01:12:04.937241 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxjn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-548b5c59b-jhbn9_calico-system(0a01ad60-6870-4247-94ac-0665fa604563): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:04.938756 kubelet[2817]: E0114 01:12:04.938684 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:12:06.556192 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:06.556353 kernel: audit: type=1130 audit(1768353126.540:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:06.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:06.540810 systemd[1]: Started sshd@26-10.0.0.105:22-10.0.0.1:50022.service - OpenSSH per-connection server daemon (10.0.0.1:50022). 
Jan 14 01:12:06.658000 audit[5616]: USER_ACCT pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.662115 sshd[5616]: Accepted publickey for core from 10.0.0.1 port 50022 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:06.667577 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:06.675019 kernel: audit: type=1101 audit(1768353126.658:905): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.662000 audit[5616]: CRED_ACQ pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.694026 systemd-logind[1593]: New session 28 of user core. 
Jan 14 01:12:06.696136 kernel: audit: type=1103 audit(1768353126.662:906): pid=5616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.696754 kernel: audit: type=1006 audit(1768353126.662:907): pid=5616 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 01:12:06.696798 kernel: audit: type=1300 audit(1768353126.662:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07722c70 a2=3 a3=0 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:06.662000 audit[5616]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07722c70 a2=3 a3=0 items=0 ppid=1 pid=5616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:06.710649 kernel: audit: type=1327 audit(1768353126.662:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:06.662000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:06.721699 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 01:12:06.731000 audit[5616]: USER_START pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.747991 kernel: audit: type=1105 audit(1768353126.731:908): pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.748521 kernel: audit: type=1103 audit(1768353126.736:909): pid=5621 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.736000 audit[5621]: CRED_ACQ pid=5621 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.925156 sshd[5621]: Connection closed by 10.0.0.1 port 50022 Jan 14 01:12:06.927180 sshd-session[5616]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:06.931000 audit[5616]: USER_END pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.939356 systemd[1]: sshd@26-10.0.0.105:22-10.0.0.1:50022.service: Deactivated successfully. 
Jan 14 01:12:06.946947 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 01:12:06.948972 kernel: audit: type=1106 audit(1768353126.931:910): pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.951575 systemd-logind[1593]: Session 28 logged out. Waiting for processes to exit. Jan 14 01:12:06.931000 audit[5616]: CRED_DISP pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:06.956986 systemd-logind[1593]: Removed session 28. Jan 14 01:12:06.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.105:22-10.0.0.1:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:06.965492 kernel: audit: type=1104 audit(1768353126.931:911): pid=5616 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:08.734496 kubelet[2817]: E0114 01:12:08.733462 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:12:11.738226 kubelet[2817]: E0114 01:12:11.735404 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:11.743668 containerd[1618]: time="2026-01-14T01:12:11.743581071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:12:11.828636 containerd[1618]: time="2026-01-14T01:12:11.828494303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:11.832554 containerd[1618]: time="2026-01-14T01:12:11.832361503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:12:11.832810 containerd[1618]: time="2026-01-14T01:12:11.832476415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 
01:12:11.833307 kubelet[2817]: E0114 01:12:11.832969 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:11.833307 kubelet[2817]: E0114 01:12:11.833026 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:11.833307 kubelet[2817]: E0114 01:12:11.833203 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:11.839602 containerd[1618]: time="2026-01-14T01:12:11.839500481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:12:11.912088 containerd[1618]: time="2026-01-14T01:12:11.911735273Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:11.914998 containerd[1618]: time="2026-01-14T01:12:11.914702649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:12:11.914998 containerd[1618]: time="2026-01-14T01:12:11.914802557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:11.915195 kubelet[2817]: E0114 01:12:11.915147 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:11.915255 kubelet[2817]: E0114 01:12:11.915215 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:11.916000 kubelet[2817]: E0114 01:12:11.915352 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g5n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/ter
mination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q8jc6_calico-system(ba6f7f37-698f-4697-a408-a3efabbcf48e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:11.918184 kubelet[2817]: E0114 01:12:11.916884 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:12:11.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:50028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:11.947437 systemd[1]: Started sshd@27-10.0.0.105:22-10.0.0.1:50028.service - OpenSSH per-connection server daemon (10.0.0.1:50028). Jan 14 01:12:11.954178 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:11.954312 kernel: audit: type=1130 audit(1768353131.946:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:50028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:12.029000 audit[5666]: USER_ACCT pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.032013 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 50028 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:12.035431 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:12.044398 kernel: audit: type=1101 audit(1768353132.029:914): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.044521 kernel: audit: type=1103 audit(1768353132.032:915): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.032000 audit[5666]: CRED_ACQ pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 14 01:12:12.065794 systemd-logind[1593]: New session 29 of user core. Jan 14 01:12:12.032000 audit[5666]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b5a10d0 a2=3 a3=0 items=0 ppid=1 pid=5666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.080353 kernel: audit: type=1006 audit(1768353132.032:916): pid=5666 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 01:12:12.080442 kernel: audit: type=1300 audit(1768353132.032:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b5a10d0 a2=3 a3=0 items=0 ppid=1 pid=5666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.032000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:12.085306 kernel: audit: type=1327 audit(1768353132.032:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:12.087281 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 14 01:12:12.092000 audit[5666]: USER_START pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.108206 kernel: audit: type=1105 audit(1768353132.092:917): pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.096000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.118992 kernel: audit: type=1103 audit(1768353132.096:918): pid=5675 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.214653 sshd[5675]: Connection closed by 10.0.0.1 port 50028 Jan 14 01:12:12.215053 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:12.216000 audit[5666]: USER_END pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.223459 systemd-logind[1593]: Session 29 logged out. Waiting for processes to exit. 
Jan 14 01:12:12.223826 systemd[1]: sshd@27-10.0.0.105:22-10.0.0.1:50028.service: Deactivated successfully. Jan 14 01:12:12.228221 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 01:12:12.216000 audit[5666]: CRED_DISP pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.232165 systemd-logind[1593]: Removed session 29. Jan 14 01:12:12.242377 kernel: audit: type=1106 audit(1768353132.216:919): pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.242456 kernel: audit: type=1104 audit(1768353132.216:920): pid=5666 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:12.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.105:22-10.0.0.1:50028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:12.733579 containerd[1618]: time="2026-01-14T01:12:12.733310686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:12.801577 containerd[1618]: time="2026-01-14T01:12:12.801526644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:12.806688 containerd[1618]: time="2026-01-14T01:12:12.806540565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:12.806688 containerd[1618]: time="2026-01-14T01:12:12.806611823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:12.807017 kubelet[2817]: E0114 01:12:12.806818 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:12.807486 kubelet[2817]: E0114 01:12:12.807180 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:12.810293 kubelet[2817]: E0114 01:12:12.810036 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8x65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-f89f6_calico-apiserver(b1255f7d-606b-4b44-9160-25a609a72f97): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:12.811475 kubelet[2817]: E0114 01:12:12.811330 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:12:13.733008 kubelet[2817]: E0114 01:12:13.732801 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:12:15.734267 containerd[1618]: time="2026-01-14T01:12:15.734080282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:15.807604 containerd[1618]: time="2026-01-14T01:12:15.807502240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:15.810041 containerd[1618]: time="2026-01-14T01:12:15.809790839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:15.810041 containerd[1618]: time="2026-01-14T01:12:15.810003457Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:15.810575 kubelet[2817]: E0114 01:12:15.810463 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:15.810575 kubelet[2817]: E0114 01:12:15.810544 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:15.811176 kubelet[2817]: E0114 01:12:15.810675 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj2ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b64b7c788-2crrf_calico-apiserver(31330de6-3f41-4f44-bd94-776d84913764): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:15.812241 kubelet[2817]: E0114 01:12:15.812200 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:12:17.238308 systemd[1]: Started sshd@28-10.0.0.105:22-10.0.0.1:57184.service - OpenSSH per-connection server daemon (10.0.0.1:57184). Jan 14 01:12:17.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:57184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:17.243944 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:17.244021 kernel: audit: type=1130 audit(1768353137.237:922): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:57184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:17.345000 audit[5699]: USER_ACCT pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.347712 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 57184 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:17.352850 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:17.349000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.364257 systemd-logind[1593]: New session 30 of user core. Jan 14 01:12:17.376495 kernel: audit: type=1101 audit(1768353137.345:923): pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.376604 kernel: audit: type=1103 audit(1768353137.349:924): pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.376741 kernel: audit: type=1006 audit(1768353137.349:925): pid=5699 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 14 01:12:17.349000 audit[5699]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd27a90680 a2=3 a3=0 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.396756 kernel: audit: type=1300 audit(1768353137.349:925): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd27a90680 a2=3 a3=0 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.397205 kernel: audit: type=1327 audit(1768353137.349:925): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:17.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:17.404357 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 14 01:12:17.407000 audit[5699]: USER_START pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.425931 kernel: audit: type=1105 audit(1768353137.407:926): pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.426003 kernel: audit: type=1103 audit(1768353137.412:927): pid=5703 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.412000 audit[5703]: CRED_ACQ pid=5703 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.538025 sshd[5703]: Connection closed by 10.0.0.1 port 57184 Jan 14 01:12:17.539305 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:17.540000 audit[5699]: USER_END pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.545781 systemd[1]: sshd@28-10.0.0.105:22-10.0.0.1:57184.service: Deactivated successfully. Jan 14 01:12:17.549620 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 01:12:17.552498 systemd-logind[1593]: Session 30 logged out. Waiting for processes to exit. Jan 14 01:12:17.554849 systemd-logind[1593]: Removed session 30. 
Jan 14 01:12:17.540000 audit[5699]: CRED_DISP pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.620854 kernel: audit: type=1106 audit(1768353137.540:928): pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.621109 kernel: audit: type=1104 audit(1768353137.540:929): pid=5699 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:17.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.105:22-10.0.0.1:57184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:20.736639 kubelet[2817]: E0114 01:12:20.735620 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:12:21.734517 containerd[1618]: time="2026-01-14T01:12:21.734131044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:21.795025 containerd[1618]: time="2026-01-14T01:12:21.794800987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:21.797405 containerd[1618]: time="2026-01-14T01:12:21.797262178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:21.797519 containerd[1618]: time="2026-01-14T01:12:21.797417860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:21.797785 kubelet[2817]: E0114 01:12:21.797624 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:21.797785 kubelet[2817]: E0114 01:12:21.797719 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:21.798537 kubelet[2817]: E0114 01:12:21.797967 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Livenes
sProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fd4f6b7c-sb5qb_calico-system(f73e61d7-350a-471c-9476-00cd84fadf64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:21.799474 kubelet[2817]: E0114 01:12:21.799347 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:12:22.558413 systemd[1]: Started sshd@29-10.0.0.105:22-10.0.0.1:47992.service - OpenSSH per-connection server daemon (10.0.0.1:47992). Jan 14 01:12:22.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:47992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:22.562988 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:22.563083 kernel: audit: type=1130 audit(1768353142.557:931): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:47992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:22.679000 audit[5718]: USER_ACCT pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.680563 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 47992 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:22.685933 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:22.681000 audit[5718]: CRED_ACQ pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.701428 systemd-logind[1593]: New session 31 of user core. 
Jan 14 01:12:22.708695 kernel: audit: type=1101 audit(1768353142.679:932): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.709018 kernel: audit: type=1103 audit(1768353142.681:933): pid=5718 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.709207 kernel: audit: type=1006 audit(1768353142.681:934): pid=5718 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 14 01:12:22.718152 kernel: audit: type=1300 audit(1768353142.681:934): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff78708fe0 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.681000 audit[5718]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff78708fe0 a2=3 a3=0 items=0 ppid=1 pid=5718 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:22.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:22.734703 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 14 01:12:22.738022 kernel: audit: type=1327 audit(1768353142.681:934): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:22.741000 audit[5718]: USER_START pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.741000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.775819 kernel: audit: type=1105 audit(1768353142.741:935): pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.776014 kernel: audit: type=1103 audit(1768353142.741:936): pid=5722 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.925263 sshd[5722]: Connection closed by 10.0.0.1 port 47992 Jan 14 01:12:22.926389 sshd-session[5718]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:22.928000 audit[5718]: USER_END pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 14 01:12:22.928000 audit[5718]: CRED_DISP pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.952064 systemd[1]: sshd@29-10.0.0.105:22-10.0.0.1:47992.service: Deactivated successfully. Jan 14 01:12:22.956661 systemd[1]: session-31.scope: Deactivated successfully. Jan 14 01:12:22.960269 kernel: audit: type=1106 audit(1768353142.928:937): pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.960341 kernel: audit: type=1104 audit(1768353142.928:938): pid=5718 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:22.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.105:22-10.0.0.1:47992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:22.959584 systemd-logind[1593]: Session 31 logged out. Waiting for processes to exit. Jan 14 01:12:22.962505 systemd-logind[1593]: Removed session 31. 
Jan 14 01:12:23.733966 kubelet[2817]: E0114 01:12:23.732535 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:23.733966 kubelet[2817]: E0114 01:12:23.733403 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:12:24.738509 kubelet[2817]: E0114 01:12:24.738112 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:12:26.732184 kubelet[2817]: E0114 01:12:26.732130 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 14 01:12:26.737300 containerd[1618]: time="2026-01-14T01:12:26.736625613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:12:26.810495 containerd[1618]: time="2026-01-14T01:12:26.810327270Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:26.813842 containerd[1618]: time="2026-01-14T01:12:26.813646324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:12:26.813842 containerd[1618]: time="2026-01-14T01:12:26.813743075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:26.814306 kubelet[2817]: E0114 01:12:26.814081 2817 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:26.814306 kubelet[2817]: E0114 01:12:26.814183 2817 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:26.816117 kubelet[2817]: E0114 01:12:26.814398 2817 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zmvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8zxlx_calico-system(f6cd6e89-0e6a-47aa-ae56-5f40f27190c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:26.816117 kubelet[2817]: E0114 01:12:26.815596 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:12:27.942360 systemd[1]: Started sshd@30-10.0.0.105:22-10.0.0.1:48008.service - OpenSSH per-connection server daemon (10.0.0.1:48008). Jan 14 01:12:27.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:48008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:12:27.945997 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:27.946512 kernel: audit: type=1130 audit(1768353147.941:940): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:48008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:28.105000 audit[5750]: USER_ACCT pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.106780 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 48008 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:28.110764 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:28.107000 audit[5750]: CRED_ACQ pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.118975 kernel: audit: type=1101 audit(1768353148.105:941): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.119055 kernel: audit: type=1103 audit(1768353148.107:942): pid=5750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.122114 systemd-logind[1593]: New session 32 of user core. 
Jan 14 01:12:28.135533 kernel: audit: type=1006 audit(1768353148.107:943): pid=5750 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 14 01:12:28.107000 audit[5750]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb5b406d0 a2=3 a3=0 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:28.136432 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 14 01:12:28.146703 kernel: audit: type=1300 audit(1768353148.107:943): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb5b406d0 a2=3 a3=0 items=0 ppid=1 pid=5750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:28.107000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:28.146000 audit[5750]: USER_START pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.166978 kernel: audit: type=1327 audit(1768353148.107:943): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:28.167109 kernel: audit: type=1105 audit(1768353148.146:944): pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.167162 
kernel: audit: type=1103 audit(1768353148.148:945): pid=5754 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.148000 audit[5754]: CRED_ACQ pid=5754 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.294528 sshd[5754]: Connection closed by 10.0.0.1 port 48008 Jan 14 01:12:28.295663 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:28.297000 audit[5750]: USER_END pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.312955 kernel: audit: type=1106 audit(1768353148.297:946): pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.310691 systemd[1]: Started sshd@31-10.0.0.105:22-10.0.0.1:48010.service - OpenSSH per-connection server daemon (10.0.0.1:48010). Jan 14 01:12:28.297000 audit[5750]: CRED_DISP pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.318323 systemd-logind[1593]: Session 32 logged out. Waiting for processes to exit. 
Jan 14 01:12:28.319479 systemd[1]: sshd@30-10.0.0.105:22-10.0.0.1:48008.service: Deactivated successfully. Jan 14 01:12:28.327570 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 01:12:28.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.105:22-10.0.0.1:48010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:28.328071 kernel: audit: type=1104 audit(1768353148.297:947): pid=5750 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.105:22-10.0.0.1:48008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:28.333340 systemd-logind[1593]: Removed session 32. 
Jan 14 01:12:28.394443 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 48010 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:28.393000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.396000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.396000 audit[5765]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce482af80 a2=3 a3=0 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:28.396000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:28.399335 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:28.409449 systemd-logind[1593]: New session 33 of user core. Jan 14 01:12:28.417250 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 14 01:12:28.422000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.425000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.735977 kubelet[2817]: E0114 01:12:28.735761 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:12:28.860660 sshd[5773]: Connection closed by 10.0.0.1 port 48010 Jan 14 01:12:28.861985 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:28.864000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.865000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:28.879450 systemd[1]: Started sshd@32-10.0.0.105:22-10.0.0.1:48012.service - OpenSSH per-connection server daemon (10.0.0.1:48012). Jan 14 01:12:28.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.105:22-10.0.0.1:48012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:28.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.105:22-10.0.0.1:48010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:28.881485 systemd[1]: sshd@31-10.0.0.105:22-10.0.0.1:48010.service: Deactivated successfully. Jan 14 01:12:28.888296 systemd[1]: session-33.scope: Deactivated successfully. Jan 14 01:12:28.892857 systemd-logind[1593]: Session 33 logged out. Waiting for processes to exit. Jan 14 01:12:28.895573 systemd-logind[1593]: Removed session 33. 
Jan 14 01:12:29.033000 audit[5782]: USER_ACCT pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.035840 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 48012 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:29.035000 audit[5782]: CRED_ACQ pid=5782 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.035000 audit[5782]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff33132c00 a2=3 a3=0 items=0 ppid=1 pid=5782 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.035000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:29.037681 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:29.046472 systemd-logind[1593]: New session 34 of user core. Jan 14 01:12:29.050197 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 14 01:12:29.053000 audit[5782]: USER_START pid=5782 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.056000 audit[5790]: CRED_ACQ pid=5790 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.803000 audit[5802]: NETFILTER_CFG table=filter:138 family=2 entries=26 op=nft_register_rule pid=5802 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:29.803000 audit[5802]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeb674a690 a2=0 a3=7ffeb674a67c items=0 ppid=2971 pid=5802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:29.811000 audit[5802]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5802 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:29.811000 audit[5802]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb674a690 a2=0 a3=0 items=0 ppid=2971 pid=5802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.811000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:29.816335 sshd[5790]: Connection closed by 10.0.0.1 port 48012 Jan 14 01:12:29.817155 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:29.818000 audit[5782]: USER_END pid=5782 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.818000 audit[5782]: CRED_DISP pid=5782 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.828704 systemd[1]: sshd@32-10.0.0.105:22-10.0.0.1:48012.service: Deactivated successfully. Jan 14 01:12:29.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.105:22-10.0.0.1:48012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:29.832500 systemd[1]: session-34.scope: Deactivated successfully. Jan 14 01:12:29.835315 systemd-logind[1593]: Session 34 logged out. Waiting for processes to exit. Jan 14 01:12:29.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.105:22-10.0.0.1:48018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:29.842440 systemd[1]: Started sshd@33-10.0.0.105:22-10.0.0.1:48018.service - OpenSSH per-connection server daemon (10.0.0.1:48018). Jan 14 01:12:29.844764 systemd-logind[1593]: Removed session 34. 
Jan 14 01:12:29.849000 audit[5809]: NETFILTER_CFG table=filter:140 family=2 entries=38 op=nft_register_rule pid=5809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:29.849000 audit[5809]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd45d8bf0 a2=0 a3=7fffd45d8bdc items=0 ppid=2971 pid=5809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:29.860000 audit[5809]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:29.860000 audit[5809]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd45d8bf0 a2=0 a3=0 items=0 ppid=2971 pid=5809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:29.931000 audit[5808]: USER_ACCT pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.932710 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 48018 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:29.932000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.932000 audit[5808]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcea6a7200 a2=3 a3=0 items=0 ppid=1 pid=5808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:29.932000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:29.935275 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:29.944066 systemd-logind[1593]: New session 35 of user core. Jan 14 01:12:29.954195 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 14 01:12:29.958000 audit[5808]: USER_START pid=5808 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:29.962000 audit[5813]: CRED_ACQ pid=5813 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.258395 sshd[5813]: Connection closed by 10.0.0.1 port 48018 Jan 14 01:12:30.259110 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:30.263000 audit[5808]: USER_END pid=5808 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 
01:12:30.263000 audit[5808]: CRED_DISP pid=5808 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.271980 systemd[1]: sshd@33-10.0.0.105:22-10.0.0.1:48018.service: Deactivated successfully. Jan 14 01:12:30.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.105:22-10.0.0.1:48018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:30.274804 systemd[1]: session-35.scope: Deactivated successfully. Jan 14 01:12:30.278162 systemd-logind[1593]: Session 35 logged out. Waiting for processes to exit. Jan 14 01:12:30.281429 systemd[1]: Started sshd@34-10.0.0.105:22-10.0.0.1:48024.service - OpenSSH per-connection server daemon (10.0.0.1:48024). Jan 14 01:12:30.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.105:22-10.0.0.1:48024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:30.284049 systemd-logind[1593]: Removed session 35. 
Jan 14 01:12:30.359000 audit[5824]: USER_ACCT pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.360773 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 48024 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:30.362000 audit[5824]: CRED_ACQ pid=5824 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.362000 audit[5824]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdca2e5e70 a2=3 a3=0 items=0 ppid=1 pid=5824 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:30.362000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:30.365021 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:30.376293 systemd-logind[1593]: New session 36 of user core. Jan 14 01:12:30.383168 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 14 01:12:30.390000 audit[5824]: USER_START pid=5824 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.395000 audit[5828]: CRED_ACQ pid=5828 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.534778 sshd[5828]: Connection closed by 10.0.0.1 port 48024 Jan 14 01:12:30.535280 sshd-session[5824]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:30.536000 audit[5824]: USER_END pid=5824 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.536000 audit[5824]: CRED_DISP pid=5824 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:30.542034 systemd[1]: sshd@34-10.0.0.105:22-10.0.0.1:48024.service: Deactivated successfully. Jan 14 01:12:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.105:22-10.0.0.1:48024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:30.546719 systemd[1]: session-36.scope: Deactivated successfully. Jan 14 01:12:30.552393 systemd-logind[1593]: Session 36 logged out. Waiting for processes to exit. 
Jan 14 01:12:30.554277 systemd-logind[1593]: Removed session 36. Jan 14 01:12:31.738308 kubelet[2817]: E0114 01:12:31.738158 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:12:34.733417 kubelet[2817]: E0114 01:12:34.733372 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:12:34.735208 kubelet[2817]: E0114 01:12:34.735099 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:12:35.558546 systemd[1]: Started sshd@35-10.0.0.105:22-10.0.0.1:49802.service - OpenSSH per-connection server daemon (10.0.0.1:49802). Jan 14 01:12:35.576186 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 14 01:12:35.576372 kernel: audit: type=1130 audit(1768353155.558:989): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:49802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:35.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:49802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:35.688000 audit[5842]: USER_ACCT pid=5842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.709631 systemd-logind[1593]: New session 37 of user core. 
Jan 14 01:12:35.710778 kernel: audit: type=1101 audit(1768353155.688:990): pid=5842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.694497 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:35.711235 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 49802 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:35.691000 audit[5842]: CRED_ACQ pid=5842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.743315 kernel: audit: type=1103 audit(1768353155.691:991): pid=5842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.743424 kernel: audit: type=1006 audit(1768353155.691:992): pid=5842 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 14 01:12:35.743457 kubelet[2817]: E0114 01:12:35.740780 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:12:35.691000 audit[5842]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd329065b0 a2=3 a3=0 items=0 ppid=1 pid=5842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:35.744340 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 14 01:12:35.758215 kernel: audit: type=1300 audit(1768353155.691:992): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd329065b0 a2=3 a3=0 items=0 ppid=1 pid=5842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:35.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:35.776202 kernel: audit: type=1327 audit(1768353155.691:992): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:35.776365 kernel: audit: type=1105 audit(1768353155.751:993): pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.751000 audit[5842]: USER_START pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.786942 kernel: audit: type=1103 audit(1768353155.757:994): pid=5846 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.757000 audit[5846]: CRED_ACQ pid=5846 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.912604 sshd[5846]: Connection closed by 10.0.0.1 port 49802 Jan 14 01:12:35.913722 sshd-session[5842]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:35.916000 audit[5842]: USER_END pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.922877 systemd-logind[1593]: Session 37 logged out. Waiting for processes to exit. Jan 14 01:12:35.924639 systemd[1]: sshd@35-10.0.0.105:22-10.0.0.1:49802.service: Deactivated successfully. Jan 14 01:12:35.928375 systemd[1]: session-37.scope: Deactivated successfully. 
Jan 14 01:12:35.933003 kernel: audit: type=1106 audit(1768353155.916:995): pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.933496 systemd-logind[1593]: Removed session 37. Jan 14 01:12:35.916000 audit[5842]: CRED_DISP pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.949970 kernel: audit: type=1104 audit(1768353155.916:996): pid=5842 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:35.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.105:22-10.0.0.1:49802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:40.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:40.925860 systemd[1]: Started sshd@36-10.0.0.105:22-10.0.0.1:49818.service - OpenSSH per-connection server daemon (10.0.0.1:49818). 
Jan 14 01:12:40.930430 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:40.930491 kernel: audit: type=1130 audit(1768353160.925:998): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:41.017000 audit[5860]: USER_ACCT pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.022506 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 49818 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:41.026047 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:41.023000 audit[5860]: CRED_ACQ pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.035505 systemd-logind[1593]: New session 38 of user core. 
Jan 14 01:12:41.041417 kernel: audit: type=1101 audit(1768353161.017:999): pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.041484 kernel: audit: type=1103 audit(1768353161.023:1000): pid=5860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.041524 kernel: audit: type=1006 audit(1768353161.023:1001): pid=5860 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 14 01:12:41.048166 kernel: audit: type=1300 audit(1768353161.023:1001): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe6d0e00 a2=3 a3=0 items=0 ppid=1 pid=5860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.023000 audit[5860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffe6d0e00 a2=3 a3=0 items=0 ppid=1 pid=5860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.059705 kernel: audit: type=1327 audit(1768353161.023:1001): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:41.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:41.066307 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 14 01:12:41.078000 audit[5860]: USER_START pid=5860 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.096960 kernel: audit: type=1105 audit(1768353161.078:1002): pid=5860 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.082000 audit[5864]: CRED_ACQ pid=5864 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.112001 kernel: audit: type=1103 audit(1768353161.082:1003): pid=5864 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.238562 sshd[5864]: Connection closed by 10.0.0.1 port 49818 Jan 14 01:12:41.240212 sshd-session[5860]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:41.241000 audit[5860]: USER_END pid=5860 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.259995 kernel: audit: type=1106 audit(1768353161.241:1004): pid=5860 uid=0 auid=500 ses=38 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.243000 audit[5860]: CRED_DISP pid=5860 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.264334 systemd[1]: sshd@36-10.0.0.105:22-10.0.0.1:49818.service: Deactivated successfully. Jan 14 01:12:41.267630 systemd[1]: session-38.scope: Deactivated successfully. Jan 14 01:12:41.270403 systemd-logind[1593]: Session 38 logged out. Waiting for processes to exit. Jan 14 01:12:41.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.105:22-10.0.0.1:49818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:41.272154 kernel: audit: type=1104 audit(1768353161.243:1005): pid=5860 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:41.272777 systemd-logind[1593]: Removed session 38. 
Jan 14 01:12:41.736573 kubelet[2817]: E0114 01:12:41.736039 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:12:42.733241 kubelet[2817]: E0114 01:12:42.733190 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:12:42.735965 kubelet[2817]: E0114 01:12:42.735371 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:12:43.732846 kubelet[2817]: E0114 01:12:43.732203 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:45.536000 audit[5907]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:45.536000 audit[5907]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef910e990 a2=0 a3=7ffef910e97c items=0 ppid=2971 pid=5907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:45.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:45.544000 audit[5907]: NETFILTER_CFG table=nat:143 family=2 entries=104 op=nft_register_chain pid=5907 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:45.544000 audit[5907]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffef910e990 a2=0 a3=7ffef910e97c items=0 ppid=2971 pid=5907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:45.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:46.261407 systemd[1]: Started sshd@37-10.0.0.105:22-10.0.0.1:38562.service - OpenSSH per-connection server daemon (10.0.0.1:38562). 
Jan 14 01:12:46.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.105:22-10.0.0.1:38562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:46.274147 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:12:46.274287 kernel: audit: type=1130 audit(1768353166.260:1009): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.105:22-10.0.0.1:38562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:46.377000 audit[5909]: USER_ACCT pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.379235 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 38562 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:46.382568 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:46.379000 audit[5909]: CRED_ACQ pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.393868 systemd-logind[1593]: New session 39 of user core. 
Jan 14 01:12:46.408040 kernel: audit: type=1101 audit(1768353166.377:1010): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.408118 kernel: audit: type=1103 audit(1768353166.379:1011): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.417244 kernel: audit: type=1006 audit(1768353166.379:1012): pid=5909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 14 01:12:46.379000 audit[5909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd30461970 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:46.418267 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 14 01:12:46.434154 kernel: audit: type=1300 audit(1768353166.379:1012): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd30461970 a2=3 a3=0 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:46.379000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:46.441056 kernel: audit: type=1327 audit(1768353166.379:1012): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:46.429000 audit[5909]: USER_START pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.461005 kernel: audit: type=1105 audit(1768353166.429:1013): pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.461128 kernel: audit: type=1103 audit(1768353166.434:1014): pid=5914 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.434000 audit[5914]: CRED_ACQ pid=5914 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.630863 sshd[5914]: Connection closed by 10.0.0.1 port 
38562 Jan 14 01:12:46.635178 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:46.637000 audit[5909]: USER_END pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.642712 systemd-logind[1593]: Session 39 logged out. Waiting for processes to exit. Jan 14 01:12:46.644252 systemd[1]: sshd@37-10.0.0.105:22-10.0.0.1:38562.service: Deactivated successfully. Jan 14 01:12:46.654455 systemd[1]: session-39.scope: Deactivated successfully. Jan 14 01:12:46.638000 audit[5909]: CRED_DISP pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.664764 systemd-logind[1593]: Removed session 39. 
Jan 14 01:12:46.667845 kernel: audit: type=1106 audit(1768353166.637:1015): pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.668005 kernel: audit: type=1104 audit(1768353166.638:1016): pid=5909 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:46.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.105:22-10.0.0.1:38562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:47.735609 kubelet[2817]: E0114 01:12:47.735540 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64" Jan 14 01:12:48.732524 kubelet[2817]: E0114 01:12:48.732414 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:48.732700 kubelet[2817]: E0114 01:12:48.732576 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 14 01:12:49.735263 kubelet[2817]: E0114 01:12:49.735201 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97" Jan 14 01:12:50.737315 kubelet[2817]: E0114 01:12:50.737067 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e" Jan 14 01:12:51.655876 systemd[1]: Started sshd@38-10.0.0.105:22-10.0.0.1:38572.service - OpenSSH per-connection server daemon (10.0.0.1:38572). Jan 14 01:12:51.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:38572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:51.660450 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:51.660854 kernel: audit: type=1130 audit(1768353171.655:1018): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:38572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:51.771000 audit[5927]: USER_ACCT pid=5927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.774578 sshd[5927]: Accepted publickey for core from 10.0.0.1 port 38572 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:51.775570 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:51.786318 systemd-logind[1593]: New session 40 of user core. 
Jan 14 01:12:51.773000 audit[5927]: CRED_ACQ pid=5927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.801435 kernel: audit: type=1101 audit(1768353171.771:1019): pid=5927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.801534 kernel: audit: type=1103 audit(1768353171.773:1020): pid=5927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.805541 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 14 01:12:51.773000 audit[5927]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03ad1fd0 a2=3 a3=0 items=0 ppid=1 pid=5927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:51.841458 kernel: audit: type=1006 audit(1768353171.773:1021): pid=5927 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 14 01:12:51.841566 kernel: audit: type=1300 audit(1768353171.773:1021): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03ad1fd0 a2=3 a3=0 items=0 ppid=1 pid=5927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:51.842998 kernel: audit: type=1327 audit(1768353171.773:1021): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:51.773000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:51.817000 audit[5927]: USER_START pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.865107 kernel: audit: type=1105 audit(1768353171.817:1022): pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.837000 audit[5931]: CRED_ACQ pid=5931 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.875711 kernel: audit: type=1103 audit(1768353171.837:1023): pid=5931 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.979448 sshd[5931]: Connection closed by 10.0.0.1 port 38572 Jan 14 01:12:51.981757 sshd-session[5927]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:51.984000 audit[5927]: USER_END pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.989532 systemd[1]: sshd@38-10.0.0.105:22-10.0.0.1:38572.service: Deactivated successfully. Jan 14 01:12:51.993816 systemd[1]: session-40.scope: Deactivated successfully. Jan 14 01:12:51.995497 systemd-logind[1593]: Session 40 logged out. Waiting for processes to exit. Jan 14 01:12:51.998661 systemd-logind[1593]: Removed session 40. 
Jan 14 01:12:52.003631 kernel: audit: type=1106 audit(1768353171.984:1024): pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:52.003693 kernel: audit: type=1104 audit(1768353171.984:1025): pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.984000 audit[5927]: CRED_DISP pid=5927 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:51.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.105:22-10.0.0.1:38572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:54.734646 kubelet[2817]: E0114 01:12:54.734270 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563" Jan 14 01:12:56.740993 kubelet[2817]: E0114 01:12:56.738100 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764" Jan 14 01:12:56.742753 kubelet[2817]: E0114 01:12:56.738101 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0" Jan 14 01:12:56.996166 systemd[1]: Started sshd@39-10.0.0.105:22-10.0.0.1:60040.service - OpenSSH per-connection server daemon (10.0.0.1:60040). Jan 14 01:12:56.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:60040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:57.001106 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:12:57.001156 kernel: audit: type=1130 audit(1768353176.995:1027): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:60040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:57.102000 audit[5945]: USER_ACCT pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:57.105124 sshd[5945]: Accepted publickey for core from 10.0.0.1 port 60040 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI Jan 14 01:12:57.108762 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:12:57.106000 audit[5945]: CRED_ACQ pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:57.130210 systemd-logind[1593]: New session 41 of user core. 
Jan 14 01:12:57.136268 kernel: audit: type=1101 audit(1768353177.102:1028): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:57.136341 kernel: audit: type=1103 audit(1768353177.106:1029): pid=5945 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 14 01:12:57.136436 kernel: audit: type=1006 audit(1768353177.106:1030): pid=5945 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 14 01:12:57.106000 audit[5945]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3414d040 a2=3 a3=0 items=0 ppid=1 pid=5945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:57.157632 kernel: audit: type=1300 audit(1768353177.106:1030): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3414d040 a2=3 a3=0 items=0 ppid=1 pid=5945 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:57.157710 kernel: audit: type=1327 audit(1768353177.106:1030): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:57.106000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:12:57.172768 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 14 01:12:57.182000 audit[5945]: USER_START pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.187000 audit[5949]: CRED_ACQ pid=5949 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.232569 kernel: audit: type=1105 audit(1768353177.182:1031): pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.232690 kernel: audit: type=1103 audit(1768353177.187:1032): pid=5949 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.364190 sshd[5949]: Connection closed by 10.0.0.1 port 60040
Jan 14 01:12:57.364177 sshd-session[5945]: pam_unix(sshd:session): session closed for user core
Jan 14 01:12:57.371000 audit[5945]: USER_END pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.378269 systemd-logind[1593]: Session 41 logged out. Waiting for processes to exit.
Jan 14 01:12:57.381056 systemd[1]: sshd@39-10.0.0.105:22-10.0.0.1:60040.service: Deactivated successfully.
Jan 14 01:12:57.389447 systemd[1]: session-41.scope: Deactivated successfully.
Jan 14 01:12:57.371000 audit[5945]: CRED_DISP pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.397446 systemd-logind[1593]: Removed session 41.
Jan 14 01:12:57.411710 kernel: audit: type=1106 audit(1768353177.371:1033): pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.411831 kernel: audit: type=1104 audit(1768353177.371:1034): pid=5945 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:12:57.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.105:22-10.0.0.1:60040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:12:58.743530 kubelet[2817]: E0114 01:12:58.743463 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fd4f6b7c-sb5qb" podUID="f73e61d7-350a-471c-9476-00cd84fadf64"
Jan 14 01:13:01.527966 containerd[1618]: time="2026-01-14T01:13:01.521074041Z" level=info msg="container event discarded" container=c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.527966 containerd[1618]: time="2026-01-14T01:13:01.526133858Z" level=info msg="container event discarded" container=c254fc705ac8c4b711ba3a0921d3c4ba9d06666323ac597ff49721f82692871a type=CONTAINER_STARTED_EVENT
Jan 14 01:13:01.577212 containerd[1618]: time="2026-01-14T01:13:01.576851664Z" level=info msg="container event discarded" container=f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1 type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.706535 containerd[1618]: time="2026-01-14T01:13:01.706385656Z" level=info msg="container event discarded" container=9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705 type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.706724 containerd[1618]: time="2026-01-14T01:13:01.706702680Z" level=info msg="container event discarded" container=4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960 type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.706819 containerd[1618]: time="2026-01-14T01:13:01.706803267Z" level=info msg="container event discarded" container=4c9e6761e04feae31217b8f9974aa1dfd66eba6a8b346293866c0ce6e8ae2960 type=CONTAINER_STARTED_EVENT
Jan 14 01:13:01.706983 containerd[1618]: time="2026-01-14T01:13:01.706879639Z" level=info msg="container event discarded" container=9bf5610b529f5054ad6735da0f2845d37a4e6c049eec654218e360587e0da705 type=CONTAINER_STARTED_EVENT
Jan 14 01:13:01.732495 containerd[1618]: time="2026-01-14T01:13:01.732206273Z" level=info msg="container event discarded" container=f1200b1e2c0b61d39a94f7f177458ea038955f66ed0b2d3b3ef9c5b66996f8c1 type=CONTAINER_STARTED_EVENT
Jan 14 01:13:01.755960 containerd[1618]: time="2026-01-14T01:13:01.754879507Z" level=info msg="container event discarded" container=a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.766658 containerd[1618]: time="2026-01-14T01:13:01.766541444Z" level=info msg="container event discarded" container=d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455 type=CONTAINER_CREATED_EVENT
Jan 14 01:13:01.903772 containerd[1618]: time="2026-01-14T01:13:01.903550287Z" level=info msg="container event discarded" container=a76e73e0ef653d0e4f11f52e92ea7999ffed2fbf306c589aceb2e110bec1c81d type=CONTAINER_STARTED_EVENT
Jan 14 01:13:01.931760 containerd[1618]: time="2026-01-14T01:13:01.930381939Z" level=info msg="container event discarded" container=d5cf5b918bf6c6e3d5d03ffe8b1d2344c6a39237e2c387abc5acb766e6eef455 type=CONTAINER_STARTED_EVENT
Jan 14 01:13:02.394630 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:13:02.394723 kernel: audit: type=1130 audit(1768353182.389:1036): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:46436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:02.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:46436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:02.390135 systemd[1]: Started sshd@40-10.0.0.105:22-10.0.0.1:46436.service - OpenSSH per-connection server daemon (10.0.0.1:46436).
Jan 14 01:13:02.573000 audit[5962]: USER_ACCT pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.575628 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 46436 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:13:02.578195 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:13:02.587670 systemd-logind[1593]: New session 42 of user core.
Jan 14 01:13:02.575000 audit[5962]: CRED_ACQ pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.613689 kernel: audit: type=1101 audit(1768353182.573:1037): pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.613777 kernel: audit: type=1103 audit(1768353182.575:1038): pid=5962 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.614039 kernel: audit: type=1006 audit(1768353182.575:1039): pid=5962 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1
Jan 14 01:13:02.575000 audit[5962]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff88517f50 a2=3 a3=0 items=0 ppid=1 pid=5962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:02.640346 kernel: audit: type=1300 audit(1768353182.575:1039): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff88517f50 a2=3 a3=0 items=0 ppid=1 pid=5962 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:02.640534 kernel: audit: type=1327 audit(1768353182.575:1039): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:02.575000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:02.641682 systemd[1]: Started session-42.scope - Session 42 of User core.
Jan 14 01:13:02.649000 audit[5962]: USER_START pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.653000 audit[5966]: CRED_ACQ pid=5966 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.688032 kernel: audit: type=1105 audit(1768353182.649:1040): pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.688222 kernel: audit: type=1103 audit(1768353182.653:1041): pid=5966 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.887611 sshd[5966]: Connection closed by 10.0.0.1 port 46436
Jan 14 01:13:02.886321 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Jan 14 01:13:02.898000 audit[5962]: USER_END pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.904672 systemd[1]: sshd@40-10.0.0.105:22-10.0.0.1:46436.service: Deactivated successfully.
Jan 14 01:13:02.914366 systemd[1]: session-42.scope: Deactivated successfully.
Jan 14 01:13:02.926866 systemd-logind[1593]: Session 42 logged out. Waiting for processes to exit.
Jan 14 01:13:02.933313 systemd-logind[1593]: Removed session 42.
Jan 14 01:13:02.940553 kernel: audit: type=1106 audit(1768353182.898:1042): pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.940657 kernel: audit: type=1104 audit(1768353182.898:1043): pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.898000 audit[5962]: CRED_DISP pid=5962 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:02.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.105:22-10.0.0.1:46436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:03.735519 kubelet[2817]: E0114 01:13:03.735315 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-f89f6" podUID="b1255f7d-606b-4b44-9160-25a609a72f97"
Jan 14 01:13:05.743328 kubelet[2817]: E0114 01:13:05.743249 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q8jc6" podUID="ba6f7f37-698f-4697-a408-a3efabbcf48e"
Jan 14 01:13:06.734583 kubelet[2817]: E0114 01:13:06.733863 2817 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 14 01:13:06.738545 kubelet[2817]: E0114 01:13:06.738404 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-548b5c59b-jhbn9" podUID="0a01ad60-6870-4247-94ac-0665fa604563"
Jan 14 01:13:07.737826 kubelet[2817]: E0114 01:13:07.737689 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8zxlx" podUID="f6cd6e89-0e6a-47aa-ae56-5f40f27190c0"
Jan 14 01:13:07.940110 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:13:07.940214 kernel: audit: type=1130 audit(1768353187.919:1045): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:07.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:07.920339 systemd[1]: Started sshd@41-10.0.0.105:22-10.0.0.1:46442.service - OpenSSH per-connection server daemon (10.0.0.1:46442).
Jan 14 01:13:08.068000 audit[5983]: USER_ACCT pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.074654 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 46442 ssh2: RSA SHA256:d1jN1F08JQsTOvdMdD0LaCDvaO7OhzXIMvAp7mCRWmI
Jan 14 01:13:08.085096 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:13:08.081000 audit[5983]: CRED_ACQ pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.097754 systemd-logind[1593]: New session 43 of user core.
Jan 14 01:13:08.111360 kernel: audit: type=1101 audit(1768353188.068:1046): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.111543 kernel: audit: type=1103 audit(1768353188.081:1047): pid=5983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.111590 kernel: audit: type=1006 audit(1768353188.081:1048): pid=5983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1
Jan 14 01:13:08.081000 audit[5983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe175c5920 a2=3 a3=0 items=0 ppid=1 pid=5983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:08.147143 kernel: audit: type=1300 audit(1768353188.081:1048): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe175c5920 a2=3 a3=0 items=0 ppid=1 pid=5983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:13:08.148367 kernel: audit: type=1327 audit(1768353188.081:1048): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:08.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:13:08.159977 systemd[1]: Started session-43.scope - Session 43 of User core.
Jan 14 01:13:08.173000 audit[5983]: USER_START pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.179000 audit[5987]: CRED_ACQ pid=5987 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.207577 kernel: audit: type=1105 audit(1768353188.173:1049): pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.207673 kernel: audit: type=1103 audit(1768353188.179:1050): pid=5987 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.383007 sshd[5987]: Connection closed by 10.0.0.1 port 46442
Jan 14 01:13:08.382950 sshd-session[5983]: pam_unix(sshd:session): session closed for user core
Jan 14 01:13:08.385000 audit[5983]: USER_END pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.391551 systemd-logind[1593]: Session 43 logged out. Waiting for processes to exit.
Jan 14 01:13:08.392738 systemd[1]: sshd@41-10.0.0.105:22-10.0.0.1:46442.service: Deactivated successfully.
Jan 14 01:13:08.396389 systemd[1]: session-43.scope: Deactivated successfully.
Jan 14 01:13:08.402566 systemd-logind[1593]: Removed session 43.
Jan 14 01:13:08.386000 audit[5983]: CRED_DISP pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.415664 kernel: audit: type=1106 audit(1768353188.385:1051): pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.415778 kernel: audit: type=1104 audit(1768353188.386:1052): pid=5983 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 14 01:13:08.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.105:22-10.0.0.1:46442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:13:08.733875 kubelet[2817]: E0114 01:13:08.733551 2817 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b64b7c788-2crrf" podUID="31330de6-3f41-4f44-bd94-776d84913764"